
Cache MySQL table to CSQL April 28, 2009

Posted by Prabakaran Thirumalai in csqlcache.

The extreme speed and capabilities of CSQL are now available to MySQL customers, allowing them to process their growing data volumes faster than ever before. In today’s fast-paced business environment, the ability to access, capture and analyze data in real time is increasingly a source of competitive advantage, and the applications that do so must run with very low response times and high throughput.

Although MySQL provides a memory-based storage engine in both its open-source and enterprise editions to improve application throughput and performance, CSQL complements it with a formidable 20-30 times faster performance on all types of queries that return a single record in the standard Wisconsin Benchmark.

CSQL’s speed is a vital resource for clients in many industries, such as healthcare, telecommunications, government, ticketing and reservation services, web retail and capital markets, that require instant and reliable business information. To provide more predictable response times for applications such as web collaboration, online mobile phone charging and stock trading, data from MySQL can be cached in CSQL to support peak workloads.

For more information, visit http://www.csqldb.com

Scaling applications/servers to handle more load April 1, 2009

Posted by Prabakaran Thirumalai in csqlcache.

With the speed of business increasing, and the volume of information that enterprises must process growing as well, businesses in many industry domains need to make the transition to real-time data management in order to stay competitive.
Despite this demand for speed, enterprises are reluctant to migrate their applications: they do not want to give up database systems they have used for many years and that are proven stable in their environments.

The CSQL main memory database executes transactions 30 times faster than other leading disk-based database management systems.

CSQL Cache works in conjunction with an existing database management system (MySQL, PostgreSQL, Oracle, etc.) and gives applications the flexibility to choose, on a per-table basis, between the feature-rich functionality of the existing database and the high-performance CSQL MMDB, according to performance requirements. By caching frequently accessed tables from the existing database management system close to the application host, an application can improve database throughput by 100 times.
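As a minimal sketch of the per-table caching idea described above (not CSQL’s actual API; `TableCache` and `fetch_from_mysql` are hypothetical names), frequently read rows of one backend table can be kept in an in-memory map close to the application, with the slow path to the real DBMS taken only on a miss:

```python
# Hypothetical read-through cache for one backend table, keyed by primary key.
# Illustrative only: CSQL's real interface is SQL, not a Python class.

class TableCache:
    """Keeps an in-memory copy of rows from one slower backend table."""

    def __init__(self, backend_fetch):
        self._rows = {}                      # primary key -> cached row
        self._backend_fetch = backend_fetch  # slow path to the real DBMS

    def get(self, pk):
        if pk in self._rows:                 # hit: served from local memory
            return self._rows[pk]
        row = self._backend_fetch(pk)        # miss: round trip to the backend
        if row is not None:
            self._rows[pk] = row             # populate for the next access
        return row

# Usage: cache a frequently read "subscribers" table
def fetch_from_mysql(pk):
    # placeholder for a real SELECT against the backend database
    return {"id": pk, "plan": "prepaid"}

subscribers = TableCache(fetch_from_mysql)
subscribers.get(42)   # first access: miss, goes to the backend
subscribers.get(42)   # second access: hit, served from memory
```

Read-mostly tables benefit most from this pattern, since every repeated read avoids a network round trip.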

Improves ROI by enabling business applications to process 1 million transactions in less than half a minute.
Seamlessly plugs into the existing architecture with no or minimal code changes.
Reduces network bandwidth and the load on back-end systems.
No additional hardware needed to handle more load or more customers.

For more information on the product, visit the product web site.



Levels of Caching June 8, 2008

Posted by Prabakaran Thirumalai in Uncategorized.

A cache is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch or to compute, compared to the cost of reading the cache. (Wiki)

Computers have several levels of caches to speed up operation, including the processor cache, memory cache and disk cache. Caching can also be implemented for frequently accessed internet pages on a web server and for frequently accessed data (tables) in databases. Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.

When using a cache, you must check the cache to see if an item is in there. If it is there, it’s called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area.
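The hit/miss check described above can be sketched in a few lines; the `CountingCache` class and its `load` callback are illustrative names, not from any particular library:

```python
# Minimal model of cache lookup with hit/miss accounting.
class CountingCache:
    def __init__(self):
        self.store, self.hits, self.misses = {}, 0, 0

    def get(self, key, load):
        if key in self.store:
            self.hits += 1        # hit: no round trip to slower memory
            return self.store[key]
        self.misses += 1          # miss: wait for the slower memory area
        value = load(key)
        self.store[key] = value   # keep it for next time
        return value

cache = CountingCache()
for key in ["a", "b", "a", "a"]:
    cache.get(key, lambda k: k.upper())
print(cache.hits, cache.misses)   # prints: 2 2
```

The hit ratio (here 2 out of 4 lookups) is the figure of merit at every caching level discussed below.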

Levels of Caching

L1 Cache

L1 cache is an abbreviation of Level 1 cache; it is also called primary cache.
L1 cache is a small, fast memory cache that is built into the CPU and helps speed access to important and frequently used data. It is used for temporary storage of instructions and data, organised in blocks of 32 bytes.

Write-back and write-through cache: write-through happens when the processor writes data simultaneously into the cache and into main memory (to assure coherency). Write-back occurs when the processor writes to the cache and then proceeds to the next instruction; the cache holds the written data and writes it into main memory only when that cache line is about to be replaced. Write-back offers about 10% higher performance than write-through, but a cache with this function is more costly. A third write mode, write-through with buffer, gives performance similar to write-back.
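A toy model of the two write policies, with a plain dict standing in for main memory (all names are illustrative, not a real hardware interface):

```python
# Toy comparison of write-through vs. write-back, as described above.
main_memory = {}   # stands in for the slower backing store

class WriteThroughCache:
    def __init__(self):
        self.lines = {}

    def write(self, addr, value):
        self.lines[addr] = value
        main_memory[addr] = value    # write-through: memory updated at once

class WriteBackCache:
    def __init__(self):
        self.lines = {}
        self.dirty = set()

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)         # write-back: only mark the line dirty

    def evict(self, addr):
        if addr in self.dirty:       # flush to memory only on replacement
            main_memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        del self.lines[addr]

wb = WriteBackCache()
wb.write(1, "x")     # main_memory not touched yet
wb.evict(1)          # now "x" reaches main_memory
```

The performance difference comes from the deferred memory writes in the write-back case; the cost is the extra dirty-line bookkeeping.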

Speed: 5 cycles

Granularity: word length (64 bit or 128 bit)

Backend: L2 Cache

L2 Cache

Level 2 cache, also referred to as secondary cache, is also present inside the processor.

Speed: 10 cycles

Granularity: word length (64 bit or 128 bit)

Backend: Main memory



Main Memory

The principal level of system memory is referred to as main memory, or Random Access Memory (RAM).

Main memory is attached to the processor via its address and data buses. Each bus consists of a number of electrical circuits, or bits. The width of the address bus dictates how many different memory locations can be accessed, and the width of the data bus how much information is stored at each location. Main memory is built up using DRAM (Dynamic RAM) chips.
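A quick worked example of those bus-width relationships, assuming a 32-bit address bus and a 64-bit data bus (the figures are illustrative, not tied to any specific machine):

```python
# An n-bit address bus can reach 2**n distinct locations; each location
# holds one data-bus-width worth of bits.
address_bits = 32
data_bits = 64

locations = 2 ** address_bits                # addressable memory locations
total_bytes = locations * (data_bits // 8)   # bytes reachable in one address map

print(locations)     # 4294967296 locations (4 Gi)
print(total_bytes)   # 34359738368 bytes (32 GiB with a 64-bit data bus)
```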

RAM is used as a cache for data that is initially loaded in from the hard disk (or other I/O storage systems).

Speed: 5 to 50 ns

Granularity: 4 KB (page size)

Backend: Disk blocks


Disk Cache

Speed: 5 milliseconds

Granularity: Files

Backend: Distributed systems

HTTP Cache

Speed: 1 sec

Granularity: Internet Pages

Backend: Pages on disk

Database Cache

Speed: 1 millisecond for select

Granularity: Table, Result Set

Backend: Database connected via network
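A sketch of result-set caching at this level, keyed by the query text (the `execute_on_backend` callback is hypothetical, standing in for the ~1 ms networked call to the backend database):

```python
# Illustrative result-set cache: repeated SELECTs skip the network
# round trip to the backend database.
result_cache = {}

def cached_select(sql, execute_on_backend):
    if sql in result_cache:            # repeat query: served at memory speed
        return result_cache[sql]
    rows = execute_on_backend(sql)     # first time: network trip to the DBMS
    result_cache[sql] = rows
    return rows

rows = cached_select("SELECT id FROM t1", lambda q: [(1,), (2,)])
```

Real database caches must also invalidate or refresh cached result sets when the underlying tables change, which this sketch omits.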

