Postpone hardware upgrade by employing CSQL Cache
September 27, 2009. Posted by Prabakaran Thirumalai in csqlcache.
Tags: csqlcache, database cache, middle tier cache, mysql cache, scalability, transaction cache
With the speed of business increasing, and the volume of information that enterprises must process growing as well, businesses in many industry domains are transitioning to real time data management in order to stay competitive.
Though there is huge demand for speed, enterprises are reluctant to migrate their applications, as they do not want to give up the existing database systems that they have used for many years and that have proven stable in their environment. By caching frequently accessed tables at the application tier, applications can reduce the load on backend databases and cut network calls, resulting in very high throughput.
Enterprises can postpone hardware upgrades for data management (more processors, or more machines replicating data) by employing transparent caching for their existing database, which can speed up data access by 20 to 30 times.
CSQL Cache is a generic database caching platform that caches frequently accessed tables from your existing open source or commercial database management system (Oracle, DB2, Sybase, MySQL, Postgres, etc.) close to the application tier. It uses the fastest main memory database (CSQL MMDB), designed for high-performance, high-volume data computing, to cache the tables, and enables real time applications to deliver faster, predictable response times with high throughput.
One of the main advantages of CSQL over other caching mechanisms is that the caching is transparent to the application, and CSQL allows updates on the cached data, which are automatically propagated to the actual database. It also allows applications to cache partial records or partial fields from the actual table.
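The transparent write-through idea described above can be sketched in a few lines of Python. CSQL's internals are not shown here; this is only an illustration using sqlite3 to stand in for both the in-memory cache and the backend database, and all names are made up for the example.

```python
import sqlite3

# Illustrative sketch of a write-through table cache: reads are served
# from an in-memory copy, writes go to both the cache and the backend.
# (sqlite3 stands in for both tiers; CSQL does this at the SQL layer.)
backend = sqlite3.connect(":memory:")   # stands in for MySQL/Oracle/etc.
cache = sqlite3.connect(":memory:")     # stands in for the CSQL MMDB cache

backend.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
backend.execute("INSERT INTO customers VALUES (1, 'alice'), (2, 'bob')")

def load_table(table):
    """Copy a frequently accessed table from the backend into the cache."""
    cache.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, name TEXT)")
    rows = backend.execute(f"SELECT id, name FROM {table}").fetchall()
    cache.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)

def cached_update(sql, params=()):
    """Write-through: apply the update to the cache and the backend."""
    cache.execute(sql, params)
    backend.execute(sql, params)

load_table("customers")
cached_update("UPDATE customers SET name = ? WHERE id = ?", ("carol", 2))

# Reads now hit only the fast in-memory copy; the backend stays consistent.
print(cache.execute("SELECT name FROM customers WHERE id = 2").fetchone())
print(backend.execute("SELECT name FROM customers WHERE id = 2").fetchone())
```

The real product intercepts the SQL itself, so the application never calls helpers like these; the sketch only shows why a write on the cache and a write on the backend stay in step.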
For more information, visit the product web site.
CSQL Cache vs Object Caching Techniques
August 15, 2009. Posted by Prabakaran Thirumalai in csqlcache.
Tags: database cache, Table Cache
Some of the shortcomings of object caching techniques are given below:
1. An object cache is suitable only for read-only workloads.
2. Serialization and deserialization operations are slow (especially in Java).
3. Grouping and aggregation operations on sets of related objects are very slow.
4. Updates to the cache must be applied to the database explicitly by the application.
5. Direct updates on the database do not propagate to the cache automatically.
6. Every cache hit involves network overhead, even if the cache resides on the same machine (as with memcached).
CSQL Cache provides transparent caching of complete or partial tables from database allowing applications to perform any SQL operations on the cached tables. It can also be used to store temporary data such as session information in MMDB allowing applications to scale.
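Point 3 above is worth seeing concretely. The sketch below (illustrative only; the data and names are invented) contrasts an object cache, where an aggregate means fetching and deserializing every blob, with a table cache, where the same aggregate is a single SQL call over cached rows.

```python
import pickle
import sqlite3

# An object cache (memcached-style) stores opaque serialized blobs, so an
# aggregate over related objects must fetch and deserialize each of them.
orders = {i: pickle.dumps({"id": i, "amount": i * 10}) for i in range(1, 6)}
total_obj = sum(pickle.loads(blob)["amount"] for blob in orders.values())

# A table cache keeps rows queryable, so the same aggregate is one SQL call
# that the engine can evaluate without any per-object deserialization.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(i, i * 10) for i in range(1, 6)])
total_sql = db.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

print(total_obj, total_sql)  # both 150
```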
For more information on CSQL Cache, visit the product web site.
Accelerate MySQL with CSQL MMDB
April 15, 2009. Posted by Prabakaran Thirumalai in cache, csqlcache.
Tags: database cache, mysql cache
CSQL, a main memory database engine, provides transparent caching for MySQL databases with no or minimal application code changes. CSQL MMDB is 20-30X faster than disk-based databases. By caching data close to the application using CSQL MMDB, it reduces network latency and provides unprecedented performance for data access.
Main memory databases are many times faster than traditional disk-based database management systems such as Oracle, Sybase, MySQL, Postgres, etc. This performance gain is not simply because they avoid the I/O cycles needed to fetch data from disk into memory. Even when a disk-based DBMS holds all of its data in the buffer cache, it remains slow, because its data structures and access algorithms are inherently designed around disk-based storage.
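A rough micro-benchmark can illustrate that last point. The sketch below keeps SQLite's disk-oriented B-tree format entirely in RAM and compares lookups against a memory-native structure (a Python dict); even with zero disk I/O, the disk-format engine pays for page and B-tree indirection. Absolute numbers vary by machine, and this is only an analogy, not a measurement of CSQL itself.

```python
import sqlite3
import time

# SQLite's on-disk B-tree format, held entirely in RAM: no I/O occurs,
# yet each lookup still walks a structure designed for disk pages.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v INTEGER)")
db.executemany("INSERT INTO t VALUES (?, ?)", [(i, i) for i in range(10000)])

native = {i: i for i in range(10000)}  # memory-native structure

start = time.perf_counter()
for i in range(10000):
    db.execute("SELECT v FROM t WHERE k = ?", (i,)).fetchone()
sql_time = time.perf_counter() - start

start = time.perf_counter()
for i in range(10000):
    native[i]
dict_time = time.perf_counter() - start

print(f"B-tree-in-RAM: {sql_time:.4f}s, memory-native: {dict_time:.4f}s")
```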
For more information, visit the product web site.
Requirements of a good data caching solution
April 13, 2009. Posted by Prabakaran Thirumalai in cache, csqlcache.
Tags: database cache
This article outlines the features of a good data caching solution for read- and write-intensive applications.
Updateable Cache Tables
Most existing cache solutions are read-only, which limits their usage to a small segment of applications, i.e., non-real-time applications.
For updateable caches, updates that happen in the cache should be propagated to the target database, and any updates that happen directly on the target database should reach the cache automatically.
Synchronous and Asynchronous update propagation
Updates on cached tables should be propagated to the target database in two modes. Synchronous mode ensures that by the time the database operation completes, the update has been applied at the target database as well. In asynchronous mode, updates to the target database are delayed.
Synchronous mode gives high cache consistency and is suited for real time applications. Asynchronous mode gives high throughput and is suited for near real time applications.
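The two modes can be sketched as follows: synchronous propagation applies the change to the target before returning, while asynchronous propagation enqueues it for a background worker. This is a minimal illustration, not CSQL's implementation; the names and the list standing in for the target database are invented.

```python
import queue
import threading

backend_log = []          # stands in for the target database
pending = queue.Queue()   # asynchronous propagation queue

def sync_update(stmt):
    """Synchronous mode: the call returns only after the target has it."""
    backend_log.append(stmt)          # apply at the target database now

def async_update(stmt):
    """Asynchronous mode: enqueue and return; a worker applies it later."""
    pending.put(stmt)

def propagator():
    # Background worker that drains the queue into the target database.
    while True:
        stmt = pending.get()
        if stmt is None:              # shutdown sentinel
            break
        backend_log.append(stmt)      # delayed apply at the target
        pending.task_done()

worker = threading.Thread(target=propagator, daemon=True)
worker.start()

sync_update("UPDATE t SET v = 1")     # consistent immediately
async_update("UPDATE t SET v = 2")    # consistent eventually
pending.join()                        # wait for the worker to catch up
print(backend_log)
```

The trade-off in the text falls out directly: `sync_update` holds the caller until the target is consistent, while `async_update` returns at once and tolerates a window where cache and target diverge.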
Multiple cache granularity: Database level, Table level and Result-set caching
Major portions of corporate databases are historical and infrequently accessed, but some information, such as a premium customer's data, must be instantly accessible.
Recovery for cached tables
In case of a system or power failure, all committed transactions on the cached tables should be recovered when the caching platform restarts.
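One common way to meet this requirement is a redo log: committed transactions are appended durably, and replayed into the cache on restart. The sketch below is a toy version of that idea under invented names; real MMDB recovery also handles checkpoints, partial writes, and log truncation.

```python
import json
import os
import tempfile

# Toy redo log for a cached key/value table (illustrative names only).
log_path = os.path.join(tempfile.mkdtemp(), "redo.log")

def commit(txn):
    """Append the committed change to the redo log before acknowledging."""
    with open(log_path, "a") as f:
        f.write(json.dumps(txn) + "\n")

def recover():
    """On restart, rebuild the cached table by replaying the log in order."""
    table = {}
    with open(log_path) as f:
        for line in f:
            key, value = json.loads(line)
            table[key] = value
    return table

commit(["acct:1", 100])
commit(["acct:1", 250])
print(recover())   # {'acct:1': 250}
```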
Tools to validate the coherence of cache
With asynchronous update propagation, the cache at different cache nodes and the target database may diverge. This divergence has to be resolved manually, so the caching solution should provide tools to identify the mismatches and take corrective measures if required.
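A coherence-check tool of this kind can be as simple as hashing each table's rows on both sides and comparing the digests. The sketch below is one possible approach, not CSQL's actual tooling; a production checker would hash in chunks and handle type differences between engines.

```python
import hashlib
import sqlite3

def table_digest(conn, table):
    """Digest a table's contents in a deterministic order for comparison."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    return hashlib.sha256(repr(rows).encode()).hexdigest()

cache = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (cache, target):
    conn.execute("CREATE TABLE t (id INTEGER, v TEXT)")
    conn.execute("INSERT INTO t VALUES (1, 'a')")

print(table_digest(cache, "t") == table_digest(target, "t"))  # coherent

cache.execute("UPDATE t SET v = 'b' WHERE id = 1")  # not yet propagated
print(table_digest(cache, "t") == table_digest(target, "t"))  # diverged
```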
Clustering is employed in many solutions to increase availability and to achieve load balancing. A caching platform should work in a clustered environment spanning multiple nodes, keeping the cached data coherent across nodes.
Transparent access to non-cached tables residing in the target database
The database cache should keep track of queries and be able to intelligently route them either to the cache or to the origin database, based on data locality, without any application code modification.
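In its simplest form, such a router inspects which tables a query touches and sends it to the cache only when all of them are cached. The sketch below is a naive illustration (the catalog and the regex-based table extraction are invented for the example; a real router parses SQL properly).

```python
import re

# Illustrative cache catalog: tables currently held in the cache.
CACHED_TABLES = {"customers", "orders"}

def route(sql):
    """Send a query to the cache only if every table it touches is cached."""
    tables = set(re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.IGNORECASE))
    return "cache" if tables and tables <= CACHED_TABLES else "origin"

print(route("SELECT * FROM customers"))    # cache
print(route("SELECT * FROM customers "
            "JOIN invoices ON customers.id = invoices.cid"))  # origin
```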
Transparent failover
There should be no service outage if the caching platform fails; client connections should be routed to the target database.
Minimal changes to application for adapting the caching solution
Support for standard interfaces such as JDBC and ODBC lets applications work seamlessly without code changes. The solution should route all stored procedure calls to the target database so that they do not need to be migrated.
Scaling applications/servers to handle more load
April 1, 2009. Posted by Prabakaran Thirumalai in cache, csqlcache.
Tags: cache, csql, database cache, mmdb, scalability
With the speed of business increasing, and the volume of information that enterprises must process growing as well, businesses in many industry domains need to transition to real time data management in order to stay competitive.
Though there is huge demand for speed, enterprises are reluctant to migrate their applications, as they do not want to give up the existing database systems that they have used for many years and that have proven stable in their environment.
The CSQL main memory database executes transactions 30 times faster than other leading disk-based database management systems.
CSQL Cache works in conjunction with an existing database management system (MySQL, Postgres, Oracle, etc.) and gives applications the flexibility to use the feature-rich functionality of the existing database or the high-performance CSQL MMDB, based on the performance requirement, on a per-table basis. By caching frequently accessed tables from the existing database management system close to the application host, applications can improve database throughput by 100 times.
Improves ROI by letting business applications process 1 million transactions in less than half a minute.
Seamlessly plugs into the existing architecture with no or minimal code changes
Reduces the network bandwidth and load on back end systems
No additional H/W to handle more load or more customers
For more information on the product, visit the product web site.
Levels of Caching
June 8, 2008. Posted by Prabakaran Thirumalai in cache.
Tags: cache, database cache, Disk, HTTP Cache, L1 Cache, L2 Cache, Memory, Page Cache, RAM, Table Cache
A cache is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch or to compute, compared to the cost of reading the cache. (Wiki)
Computers have several levels of caches to speed up operation, including the processor cache, memory cache and disk cache. Caching can also be implemented for frequently accessed internet pages on the web server, and for frequently accessed data (tables) in databases. Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.
When using a cache, you must check the cache to see if an item is in there. If it is there, it’s called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area.
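The hit/miss check just described can be sketched with a tiny dict-based cache (names invented for the illustration):

```python
# Minimal hit/miss sketch: look in the cache first; on a miss, pay the
# round trip to the slower backing store and remember the result.
cache = {}

def get(key, slow_fetch):
    if key in cache:
        return cache[key], "hit"
    value = slow_fetch(key)       # round trip to the slower memory
    cache[key] = value
    return value, "miss"

fetch = lambda k: k * 2           # stands in for the slow backing store
print(get(10, fetch))             # (20, 'miss') -- first access
print(get(10, fetch))             # (20, 'hit')  -- served from the cache
```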
Levels of Caching
L1 cache is an abbreviation of Level 1 cache. It is also called the primary cache.
L1 cache is a small, fast memory cache that is built into a CPU and helps speed access to important and frequently used data. It is used for temporary storage of instructions and data, organised in blocks of 32 bytes.
Write back and Write through cache: Write through happens when a processor writes data simultaneously into cache and into main memory (to assure coherency). Write back occurs when the processor writes to the cache and then proceeds to the next instruction. The cache holds the write-back data and writes it into main memory when that data line in cache is to be replaced. Write back offers about 10% higher performance than write-through, but cache that has this function is more costly. A third type of write mode, write through with buffer, gives similar performance to write back.
Speed: 5 cycles
Granularity : word length (64 bit or 128 bit)
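The write-through and write-back policies described above can be modeled in a few lines. This sketch is illustrative, not CPU-accurate: dicts stand in for cache lines and main memory, and eviction is triggered explicitly.

```python
# Toy model of the two CPU write policies (illustrative, not cycle-accurate).
class Cache:
    def __init__(self, policy):
        self.policy = policy      # "write-through" or "write-back"
        self.lines = {}           # addr -> value (cache lines)
        self.dirty = set()        # write-back lines not yet in main memory
        self.memory = {}          # stands in for main memory

    def write(self, addr, value):
        self.lines[addr] = value
        if self.policy == "write-through":
            self.memory[addr] = value     # memory updated immediately
        else:
            self.dirty.add(addr)          # memory updated only on eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.lines[addr]  # flush write-back data
            self.dirty.discard(addr)
        self.lines.pop(addr, None)

wt, wb = Cache("write-through"), Cache("write-back")
wt.write(0x10, 7)
wb.write(0x10, 7)
print(wt.memory.get(0x10), wb.memory.get(0x10))   # 7 None
wb.evict(0x10)
print(wb.memory.get(0x10))                        # 7
```

Note how the write-back cache's memory only sees the value once the line is replaced, which is exactly why write-back is faster per write but needs the coherency machinery the text mentions.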
Level 2 cache, also referred to as secondary cache, is also present inside the processor.
Speed: 10 cycles
Granularity : word length (64 bit or 128 bit)
The principal level of system memory is referred to as main memory, or Random Access Memory (RAM).
Main memory is attached to the processor via its address and data buses. Each bus consists of a number of electrical circuits or bits. The width of the address bus dictates how many different memory locations can be accessed, and the width of the data bus how much information is stored at each location. Main memory is built up using DRAM chips, short for Dynamic RAM.
RAM is used as a cache for data that is initially loaded in from the hard disk (or other I/O storage systems).
Speed: 5 to 50 ns
Granularity: 4 KB (page size)
Backend: Disk blocks
Disk cache
Speed: 5 millisecs
Backend: Distributed systems

HTTP cache
Speed: 1 sec
Granularity: Internet pages
Backend: Pages on disk

Database cache
Speed: 1 millisec for select
Granularity: Table, result set
Backend: Database connected via network
Tags: csqlcache, data cache, database cache, inmemory database, main memory database, middle tier cache, transaction cache
CSQL Cache is a high-performance, bi-directional, updateable database caching infrastructure that sits between clustered application processes and back-end data sources to provide unprecedented throughput to your application.
Improving Database Performance Using Database Cache
Many applications today are developed and deployed in multi-tier environments that involve browser-based clients, web application servers and backend databases. Because of their dynamic nature, these applications need to generate web pages on demand by talking to backend databases, which makes middle-tier database caching an effective approach for achieving high scalability and performance. The following are the advantages of database caching:
Scalability: distributes the query workload from the backend to multiple cheap front-end systems.
Flexibility: achieves QoS, where each cache hosts different parts of the backend data; e.g., the data of Platinum customers is cached while that of ordinary customers is not.
Availability: provides continued service for applications that depend only on cached tables, even if the backend server is unavailable.
Performance: responds fast because of data locality, and smooths out load peaks by avoiding round trips between the middle tier and the data tier.
To overcome the throughput barrier, applications scale by deploying multiple small systems instead of one single huge system. Companies have developed various homegrown solutions involving database caching to scale up their applications. These caching solutions can accelerate database performance to some extent, but they are fairly ineffective: most of them support only result-set caching, and some scale poorly. Some use a heavyweight, full-fledged database management system to cache the data at the middle tier, which yields less performance gain. These caching solutions are mostly read-only, and some provide only tools for manual lazy updates. For frequently changing data, they end up holding "dirty" cached data, resulting in long latency periods that may be entirely unacceptable for applications requiring immediate access to current data.
For the complete set of features supported by CSQL Cache, refer to the data sheet on the product web site: http://www.csqldb.com