DBD::SQLite - Self Contained RDBMS in a DBI Driver


NAME

DBD::SQLite - Self Contained RDBMS in a DBI Driver


SYNOPSIS

  use DBI;
  my $dbh = DBI->connect("dbi:SQLite:dbname=dbfile","","");


DESCRIPTION

SQLite is a public domain RDBMS engine, available from http://www.hwaci.com/sw/sqlite/.

Rather than asking you to install SQLite first, DBD::SQLite includes the entire engine in the distribution (SQLite is public domain, so this is permitted). So in order to get a fast, transaction-capable RDBMS working for your Perl project, you simply have to install this module, and nothing else.

SQLite supports the following features:

Implements a large subset of SQL92
See http://www.hwaci.com/sw/sqlite/lang.html for details.

A complete DB in a single disk file
Everything for your database is stored in a single disk file, making it easier to move things around than with DBD::CSV.

Atomic commit and rollback
Yes, DBD::SQLite is small and light, but it supports full transactions!

There's lots more to it, but DBD::SQLite is still at an early stage of development, so please refer to the documentation on the SQLite web page, listed above, for SQL details.
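As a minimal sketch of the transaction support (assuming a table called "users" already exists -- the table and column names here are made up for illustration):

  use DBI;
  my $dbh = DBI->connect("dbi:SQLite:dbname=dbfile", "", "",
      { RaiseError => 1, AutoCommit => 0 });
  eval {
      $dbh->do("INSERT INTO users (name) VALUES ('alice')");
      $dbh->do("INSERT INTO users (name) VALUES ('bob')");
      $dbh->commit;   # both rows appear, or neither
  };
  if ($@) {
      $dbh->rollback; # undo everything on error
  }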


API

The API works just like that of every other DBI module. Please see the DBI manpage for more details.

$dbh->func('last_insert_rowid')

This method returns the last inserted rowid. If you specify an INTEGER PRIMARY KEY as the first column in your table, that is the column that is returned. Otherwise, it is the hidden ROWID column. See the SQLite documentation for details.
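For example (a sketch, assuming a freshly created table whose first column is an INTEGER PRIMARY KEY named "id"):

  $dbh->do("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)");
  $dbh->do("INSERT INTO users (name) VALUES ('alice')");
  my $id = $dbh->func('last_insert_rowid');  # the "id" value of that row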


NOTES

To access the database from the command line, try using dbish which comes with the DBI module. Just type:

  dbish dbi:SQLite:foo.db

on the command line to access the file foo.db.

Alternatively, you can install SQLite from the link above (it does not conflict with DBD::SQLite) and use the supplied sqlite command line tool.


PERFORMANCE

SQLite is fast, very fast. I recently processed my 72MB log file with it, inserting the data (400,000+ rows) by using transactions and only committing every 1000 rows (otherwise the insertion is quite slow), and then performing queries on the data.
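The batched-commit approach described above can be sketched like this (the table, columns, and the get_next_log_line() parser are hypothetical):

  my $sth = $dbh->prepare(
      "INSERT INTO access_log (url, bytes) VALUES (?, ?)");
  $dbh->{AutoCommit} = 0;
  my $count = 0;
  while (my ($url, $bytes) = get_next_log_line()) {
      $sth->execute($url, $bytes);
      $dbh->commit if ++$count % 1000 == 0;  # commit every 1000 rows
  }
  $dbh->commit;  # flush the final partial batch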

Queries like count(*) and avg(bytes) took fractions of a second to return, but what surprised me most of all was:

  SELECT url, count(*) as count FROM access_log
    GROUP BY url
    ORDER BY count desc
    LIMIT 20

to discover the top 20 most-hit URLs on the site (http://axkit.org), and it returned within 2 seconds. I'm seriously considering switching my log analysis code to use this little speed demon!

Oh yeah, and that was with no indexes on the table, on a 400MHz PIII.

For best performance, be sure to tune your hdparm settings if you are using Linux. Also, you might want to set:

  PRAGMA default_synchronous = OFF

which prevents SQLite from calling fsync() when writing. fsync() calls slow down non-transactional writes significantly, so turning them off buys speed at the expense of some peace of mind. Also try playing with the cache_size pragma.
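Pragmas can be issued through the normal DBI interface, for example (the cache_size value here is just an illustrative guess -- tune it to your available memory):

  $dbh->do("PRAGMA default_synchronous = OFF");
  $dbh->do("PRAGMA cache_size = 4000");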


BUGS

There are likely to be many; please use http://rt.cpan.org/ to report bugs.


AUTHOR

Matt Sergeant, matt@sergeant.org


SEE ALSO

the DBI manpage.
