Now, this is funny.
The so-called Active Record pattern has gained a lot of currency thanks to Ruby on Rails. According to Martin Fowler:
An object carries both data and behavior. Much of this data is persistent and needs to be stored in a database. Active Record uses the most obvious approach, putting data access logic in the domain object. This way all people know how to read and write their data to and from the database.

The purpose of this post is not to discuss the merits or otherwise of this approach. One supposes that Ruby on Rails does something like what Fowler describes. It should be noted that some would suggest this violates separation of concerns: a domain object (an object representing either a single database record or a complex of data which together models something like a real-world entity) should not be concerned with retrieving its data from, or writing its data to, a database.
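To make Fowler's description concrete, here is a minimal sketch of the pattern; the User class, the users table, and its columns are all invented for the example, and an in-memory SQLite database stands in for a real one. The point is simply that the domain object itself contains the SQL for saving and loading.

```php
<?php
// Minimal Active Record sketch: the domain object carries its own
// data access logic. The "users" table and User class are made up.
class User
{
    public $id;
    public $name;
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    // The object knows how to write itself to the database...
    public function save()
    {
        if ($this->id === null) {
            $stmt = $this->db->prepare('INSERT INTO users (name) VALUES (?)');
            $stmt->execute(array($this->name));
            $this->id = (int) $this->db->lastInsertId();
        } else {
            $stmt = $this->db->prepare('UPDATE users SET name = ? WHERE id = ?');
            $stmt->execute(array($this->name, $this->id));
        }
    }

    // ...and how to read itself back.
    public static function find(PDO $db, $id)
    {
        $stmt = $db->prepare('SELECT id, name FROM users WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            return null;
        }
        $user = new self($db);
        $user->id = (int) $row['id'];
        $user->name = $row['name'];
        return $user;
    }
}

$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)');

$user = new User($db);
$user->name = 'Brian';
$user->save();
```

Note that every domain class written this way repeats the same persistence plumbing.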
I sort of agree with this, not least because every domain object has to carry around the baggage concerned with storage. One could simply have another object which performs the database transactions: it reads data from the domain object when inserting into or updating the database, or retrieves data from the database and sets the domain object's properties with its values. This is the approach that eZ Components takes.
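Sketched the other way around (again with made-up class and table names), the domain object stays free of SQL, and a separate mapper object, in the spirit of the eZ Components approach, handles the database:

```php
<?php
// The domain object is plain data and behaviour, with no SQL in it.
class Track
{
    public $id;
    public $title;
}

// A separate object performs the database transactions, reading from
// and writing to the domain object. All names here are hypothetical.
class TrackMapper
{
    private $db;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    public function insert(Track $track)
    {
        $stmt = $this->db->prepare('INSERT INTO tracks (title) VALUES (?)');
        $stmt->execute(array($track->title));
        $track->id = (int) $this->db->lastInsertId();
    }

    public function find($id)
    {
        $stmt = $this->db->prepare('SELECT id, title FROM tracks WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            return null;
        }
        $track = new Track();
        $track->id = (int) $row['id'];
        $track->title = $row['title'];
        return $track;
    }
}

$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE tracks (id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT)');

$mapper = new TrackMapper($db);
$track = new Track();
$track->title = 'Golden Hours';
$mapper->insert($track);
```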
Anyway, in thinking about this, I thought that I might learn something about database relations by "rolling my own". For my first attempt, I wrote a fairly simple MySQL database connection object and a statement object based on PDO. (I like PDO a lot.) Then I discovered pdoext, a side project of Troels Knak-Nielsen, author of Konstrukt. It's a library for composing SQL. It still requires a knowledge of SQL, but it does some of the grunt work, and it is especially useful when certain parts of an SQL query are composed at runtime.
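I won't try to reproduce pdoext's API from memory, but the general idea of composing a query at runtime can be sketched with plain PHP and PDO placeholders (table and column names invented for the example):

```php
<?php
// Build a SELECT whose field list and WHERE clause are decided at
// runtime. Conditions map placeholder expressions to their values.
function buildSelect($table, array $fields, array $conditions)
{
    $sql = 'SELECT ' . implode(', ', $fields) . ' FROM ' . $table;
    if (count($conditions) > 0) {
        $sql .= ' WHERE ' . implode(' AND ', array_keys($conditions));
    }
    return $sql;
}

$conditions = array();
$conditions['artist = ?'] = 'Eno';
$conditions['year >= ?'] = 1975;   // added only if the caller asks for it

$sql = buildSelect('albums', array('id', 'title'), $conditions);
// $sql: "SELECT id, title FROM albums WHERE artist = ? AND year >= ?"
// array_values($conditions) would then supply PDOStatement::execute().
```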
I thought I'd implement composition of common relations such as:
- ManyToMany (what, in Rails parlance, is called "has and belongs to many")
However, there are some things I want my library to do which I've not seen elsewhere.
- It should spit out domain objects. This cuts out the "middle man" approach of using a gateway of some sort to create and then populate each domain object. PDO facilitates this behaviour with its PDO::FETCH_CLASS constant and its PDOStatement::fetchObject() method. (Actually, recently I have seen this done. I just can't remember where!)
- It should allow selection of database fields in the query. Many Active Record implementations do something like "SELECT * FROM tablename". Many programmers and database administrators say that one should fetch only what one needs to reduce the amount of traffic between the application and the database server.
- It should facilitate some customization of queries (more on this in a later post). I'm sure there are some libraries that do this, so please don't flame me for this point.
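The first two points can be seen together in a small sketch: PDO::FETCH_CLASS populates domain objects directly, and the query names only the columns it wants. The Song class and songs table are invented for the example, with an in-memory SQLite database standing in for MySQL.

```php
<?php
// A made-up domain class; PDO sets its properties by column name.
class Song
{
    public $id;
    public $title;
}

$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT, lyrics TEXT)');
$db->exec("INSERT INTO songs (id, title, lyrics) VALUES (1, 'Discreet Music', '(instrumental)')");

// Select only the fields we need, so the bulky lyrics column never
// crosses the wire, and fetch straight into Song objects: no gateway
// object sits between the query and the domain object.
$stmt = $db->query('SELECT id, title FROM songs');
$songs = $stmt->fetchAll(PDO::FETCH_CLASS, 'Song');
```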
Now, I've been working on this for some time, so rather than writing a series of posts about how I'm going (with all the trials and tribulations and changes of mind and so on), I'll write a series of posts that plot a brief history of my decisions. Code examples will be included, of course.
Last week, my institution held a music festival: 6 concerts in 5 days. Students and teachers from the TV course brought their portable hi-def studio and cameras. It was our job to record multitrack sessions and also to provide them with a stereo mix.
The stereo mix was generated in Pro Tools in real time. We took a timecode feed from TV (sent over an analog cable run of 80 metres, amazing!), set the session start time, put Pro Tools "on-line" (receiving sync from timecode), and hit record. During one act, Pro Tools threw an error message. I didn't see it, but my students said an error dialog flashed very quickly and recording stopped. They took Pro Tools off-line and tried to put it on-line again, but the timecode reference jumped about 3 hours ahead. It was over a minute before they could get it all working again.
TV's timecode feed didn't miss a beat. The timecode display on the SYNC I/O box was apparently maintained.
I suppose for 6 concerts, each generating 11-12 GB of data, one drop-out is not too bad.
If you're reading this between August 16 and 26, 2008, then this entry is valid. If not, then forget it!
They didn't sell, so I've relisted them for a little bit less. Here's hoping....
Much of the time, I use an old IBM Thinkpad T30. I'm running SUSE Linux 10.3 with KDE. I went with SUSE because I got the full version of 10.0 for a song. Since then, I've upgraded with openSUSE. The only downside is that I can't get the modem working! That's not much of an issue because at home the ADSL works fine.
I also originally formatted the Linux partitions with reiserfs. If only I'd known that development of it was about to die. As it turned out, one of the contributing factors was that the author was charged with his wife's murder. I should have gone with ext2. Perhaps I'll re-install at some point.
Anyway, the whole thing runs pretty slowly with only 512MB of RAM and the KDE monster. I should go to at least 1GB. (How much RAM can it address, and what's the largest DDR2100 module available?) I'm sure that would help a lot. The hard drive grinds away for an enormous amount of time as, I presume, it writes to and reads from various caches and the swap partition. The delay can be a significant proportion of a minute.
But, until I get off my backside and buy some more RAM, I thought I'd try switching to a different desktop. So I'm currently on Xfce. Login and logout times aren't any different, and the HD is still hit quite hard. It feels a little faster overall, but there are still times when the hard drive grinds for a long time.