
Can't afford a Database Machine? Oracle pushes compression as less lavish scale-up method

At OpenWorld, Oracle puts spotlight on 11g database's Advanced Compression feature

By Eric Lai
September 29, 2008 12:00 PM ET

Computerworld - Oracle Corp.'s powerful new HP Oracle Database Machine comes with 168TB of storage, a new method of retrieving data more quickly and intelligently, and — wait for it — a $2.33 million price tag.

It's the turbocharged option for the database administrator with money to burn and a need for speed.

But most administrators don't get to drive in the fast lane — especially not with IT budgets the way they are. So as a less lavish option for enterprise users, Oracle is touting another approach.

That one involves data compression, which has long been a popular way to save storage space and money. Traditionally, though, the trade-off has been high: gobs of memory and processing power are typically needed to compress data and write it to disks. Even more is needed when the information is later extracted.
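The basic trade-off is easy to see with any general-purpose compressor. This sketch, using Python's standard zlib library on hypothetical, repetitive database-style rows, shows both sides of the bargain: the compressed copy is dramatically smaller, but CPU time must be spent every time data is written and again every time it is read back.

```python
import time
import zlib

# Hypothetical sample: repetitive, database-like rows (made-up data).
rows = b"order_id,status,region\n" + b"1001,SHIPPED,US-EAST\n" * 50_000

start = time.perf_counter()
packed = zlib.compress(rows)          # CPU spent compressing on write
compress_secs = time.perf_counter() - start

start = time.perf_counter()
restored = zlib.decompress(packed)    # more CPU spent decompressing on read
decompress_secs = time.perf_counter() - start

assert restored == rows               # lossless: the data round-trips exactly
ratio = len(rows) / len(packed)
print(f"raw: {len(rows):,} bytes, compressed: {len(packed):,} bytes "
      f"({ratio:.0f}x smaller)")
print(f"compress: {compress_secs:.4f}s, decompress: {decompress_secs:.4f}s")
```

Highly repetitive data like this compresses very well; the point Oracle is addressing is the per-read and per-write CPU cost, which historically made compression unattractive for busy transaction-processing databases.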

Now Oracle claims to have solved this thorny problem with a feature it first introduced in its Oracle 11g database, which was released last year.

By using the Advanced Compression option in 11g, Oracle says, administrators can shrink database sizes by as much as 75% and boost read/write speeds by three to four times, whether they're running a data warehouse or a transaction-processing database — all while incurring little in the way of processor-utilization penalties.

Oracle claims the storage and speed gains are so dramatic that companies using Advanced Compression will no longer need to move old, seldom-used or unused data to archives. Instead, they can keep it all in the same production database, even as the amount of data stored there grows into the hundreds of terabytes or even the petabyte range.

"This works completely transparently to your applications," Juan Loaiza, Oracle's senior vice president of systems technologies, said during a session at the company's OpenWorld conference in San Francisco last week. "It increases CPU usage by just 5%, while cutting your [database] table sizes by half."

Oracle says it's responding to the demands of enterprise customers with fast-growing databases. "The envelope is always being pushed," Loaiza said. "Unstructured data is growing very quickly. We expect someone to be running a 1 petabyte, 1,000-CPU-core database by 2010."

It's also responding to the fact that storage technology, one of the keys to database performance, has made little progress from a speed standpoint, according to Loaiza. "Disks are getting bigger, but they're not getting a whole lot faster," he said.

Taking data compression down to the block level

Oracle has offered simple index-level compression since the 8i version of its database was introduced in 1999. That improved several years later with the introduction of table-level compression in Oracle 9i Release 2, which helped data warehousing users compress data for faster bulk loads, according to Sushil Kumar, senior director of product management for database manageability, high availability and performance at Oracle.
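Oracle's internal block format isn't spelled out here, but the general idea behind this family of techniques — dictionary (symbol-table) compression, where each distinct value within a unit of storage is kept once and rows hold small references to it — can be sketched roughly as follows. The function names and sample data are hypothetical, for illustration only.

```python
# Hypothetical sketch of dictionary compression within one storage block:
# each distinct value is stored once in a per-block symbol table, and rows
# hold small integer references instead of repeating the full value.

def compress_block(rows):
    """Replace repeated values with indexes into a per-block dictionary."""
    symbols = []                      # per-block symbol table
    index = {}                        # value -> position in symbol table
    encoded = []
    for row in rows:
        encoded_row = []
        for value in row:
            if value not in index:
                index[value] = len(symbols)
                symbols.append(value)
            encoded_row.append(index[value])
        encoded.append(encoded_row)
    return symbols, encoded

def decompress_block(symbols, encoded):
    """Rebuild the original rows from the symbol table and references."""
    return [[symbols[i] for i in row] for row in encoded]

# Columns with many repeated values (statuses, regions) compress well.
block = [["1001", "SHIPPED", "US-EAST"],
         ["1002", "SHIPPED", "US-EAST"],
         ["1003", "PENDING", "US-EAST"]]
symbols, encoded = compress_block(block)
assert decompress_block(symbols, encoded) == block
print(symbols)   # each distinct value stored only once
```

The appeal of doing this at the block level is locality: a block can be read, decoded and rewritten on its own, which is what makes the approach workable for transaction processing rather than only for bulk loads.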
