Earlier this year I did a quick little post about profiling the Koha code. And I noticed today that Devel::NYTProf (which I used) has been updated and now has a module for use under mod_perl. So that's my next mission, once I shake off this cold.
Too much weights, not enough speed work
Following on from testing Koha with memcached, I decided to test the opac with mod_perl, mod_expires and mod_deflate.
So, with no mod_perl and no caching:
time curl http://opac.koha.workbuffer.org/cgi-bin/koha/opac-search.pl?q=a
real 0m2.993s
And with mod_perl:
time curl http://opac.koha.workbuffer.org/cgi-bin/koha/opac-search.pl?q=a
real 0m0.657s
And opac-main is now down to
real 0m0.010s
This of course isn't really testing mod_expires or mod_deflate, but telling the browser to cache the images, CSS and JavaScript certainly helps out a lot there too.
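As a rough sketch, the Apache configuration for the three modules might look something like the following. The module names and directives are standard Apache, but the paths, MIME types and expiry times here are illustrative assumptions, not the configuration actually used on this server:

```apache
# Hand the Koha CGI scripts to mod_perl instead of plain CGI
# (sketch only; the real Koha paths may differ)
Alias /cgi-bin/koha/ /usr/share/koha/opac/cgi-bin/
<Location /cgi-bin/koha/>
    SetHandler perl-script
    PerlResponseHandler ModPerl::Registry
    Options +ExecCGI
</Location>

# mod_expires: tell browsers to cache static assets
ExpiresActive On
ExpiresByType image/png       "access plus 1 month"
ExpiresByType text/css        "access plus 1 week"
ExpiresByType text/javascript "access plus 1 week"

# mod_deflate: compress text responses on the way out
AddOutputFilterByType DEFLATE text/html text/css text/javascript
```

The mod_perl part is what removes the per-request interpreter startup cost; the expires and deflate parts mostly help repeat visits and slow links.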
(Oh, and if you haven't seen the movie Once Were Warriors, the title won't make sense. I'm sure if you search on YouTube you can find the clip.)
Testing Koha with memcached
I've been doing some work rewriting some of the scripts in Koha to use memcached where possible.
Here's some load testing on opac-main.pl using straight CGI and no caching:
Maximum connect burst length: 1
Total: connections 20 requests 20 replies 20 test-duration 39.896 s
Connection rate: 0.5 conn/s (1994.8 ms/conn, <=20 concurrent connections)
Connection time [ms]: min 35438.7 avg 37343.4 max 39782.2 median 36828.5 stddev 1409.9
Connection time [ms]: connect 0.9
Connection length [replies/conn]: 1.000
Request rate: 0.5 req/s (1994.8 ms/req)
Request size [B]: 64.0
Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (7 samples)
Reply time [ms]: response 37135.8 transfer 206.7
Reply size [B]: header 167.0 content 1156.0 footer 2.0 (total 1325.0)
Reply status: 1xx=0 2xx=20 3xx=0 4xx=0 5xx=0
CPU time [s]: user 3.82 system 15.69 (user 9.6% system 39.3% total 48.9%)
Net I/O: 0.7 KB/s (0.0*10^6 bps)
Here's the same test with caching switched on:
Total: connections 100 requests 94 replies 94 test-duration 24.899 s
Connection rate: 4.0 conn/s (249.0 ms/conn, <=7 concurrent connections)
Connection time [ms]: min 90.9 avg 142.9 max 265.7 median 144.5 stddev 37.1
Connection time [ms]: connect 1.4
Connection length [replies/conn]: 1.000
Request rate: 3.8 req/s (264.9 ms/req)
Request size [B]: 64.0
Reply rate [replies/s]: min 0.0 avg 4.7 max 9.6 stddev 5.4 (4 samples)
Reply time [ms]: response 136.7 transfer 4.9
Reply size [B]: header 186.0 content 5847.0 footer 1.0 (total 6034.0)
Reply status: 1xx=0 2xx=94 3xx=0 4xx=0 5xx=0
CPU time [s]: user 8.53 system 14.22 (user 34.3% system 57.1% total 91.4%)
Net I/O: 22.5 KB/s (0.2*10^6 bps)
So without using the cache we were getting an average of 37343.4 milliseconds to reply. With the cache on, that drops to 142.9 ms, which is a fairly serious saving. This is, of course, when the machine is under load. If we just run some basic curl tests:
Without cache
time curl http://203.97.214.51:8080/
real 0m2.167s
With cache
real 0m0.105s
So that matches up with what we saw with the load testing.
We can't, of course, cache this page if the user is logged in, but that's fairly easy to handle: check whether the user is logged in, and if not, use the cache.
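That check-then-cache pattern might be sketched like this with Cache::Memcached. This is a sketch only: the key name, expiry time, and the `render_page` helper are assumptions for illustration, not Koha's actual code.

```perl
use strict;
use warnings;
use Cache::Memcached;

my $cache = Cache::Memcached->new( { servers => ['127.0.0.1:11211'] } );

# Hypothetical stand-in for the real (expensive) template processing.
sub render_page {
    return "<html>...opac-main...</html>";
}

sub opac_main {
    my ($logged_in) = @_;

    # Never serve a cached page to a logged-in user.
    if ( !$logged_in ) {
        my $page = $cache->get('opac-main');
        return $page if defined $page;
    }

    my $page = render_page();

    # Only cache the anonymous version of the page, for 10 minutes.
    $cache->set( 'opac-main', $page, 600 ) unless $logged_in;
    return $page;
}
```

Logged-in users always get a freshly rendered page; anonymous users get the cached copy whenever one is available.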
In the librarian interface there are significant sections of a lot of pages that change very infrequently where we could win a lot with some caching.
I’ve created a branch and will work in it some more, and also make it publicly available so others can test.
Creating a Programmers manual for Koha 3
Using the neat little module Pod::Manual, I whacked up a quick script to combine all the man pages for Koha into one, then output it as DocBook (XML, not SGML).
#!/usr/bin/perl
use strict;
use warnings;
use Pod::Manual;

my $manual = Pod::Manual->new({ title => 'C4 Manual' });
my $path = $ARGV[0];
my @chapters = <$path/*>;
foreach my $chapter (@chapters) {
    $chapter =~ s{^\Q$path\E/}{};   # strip the directory prefix
    $chapter =~ s/\.3pm$//;         # strip the man-page extension
    eval { $manual->add_chapter($chapter); };
}
print $manual->as_docbook();
I then used the dblatex tools to convert it to a PDF:
./manual.pl /path/to/man/files > file.xml
dblatex -tpdf file.xml
Got a messy git checkout?
If you are like me, you often want to rebase or pull from a repo while you have a bunch of local changes you don't want to commit (yet) but also don't want to lose; git-stash is your answer.
This is really handy when working on Koha: you don't have to muck around with local commits that get in your way when you are making patches to send upstream.
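A minimal example of that workflow (the remote and branch names here are assumptions):

```shell
# Park the uncommitted local changes
git stash

# The working tree is now clean, so this applies without conflicts
# from the uncommitted work
git pull --rebase origin master

# Bring the parked changes back on top of the updated tree
git stash pop
```

`git stash pop` applies the most recent stash and drops it; `git stash apply` keeps it around if you want to reuse it.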
Fun with profiling
I had a bit of a play with some profiling tools and Koha today. This is the output when running a profiler over mainpage.pl. Looks like some nice optimisation could be done there.
Mirror of Koha git
Using the great service provided at repo.or.cz, I've set up a mirror of the public git repository. Here's the http://repo.or.cz/w/koha.git mirror.
Koha on hostgator.com
I spent a bit of time working on getting this going tonight, nearly there, I just have to do a couple more bits of configuration and I should have it all up and running.
The main stumbling block is not having git on the hostgator server, so I'll just have to make releases on my home machine, copy them over, and upgrade. Not too bad. In fact, I've set up a cron job to do just that, so I'll have nightly releases here.
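The nightly-release crontab entry might look something like this. The script path, schedule, and everything the script does are hypothetical; only the general shape (pull, package, copy over) comes from the post:

```
# Nightly release: pull the latest code, roll a tarball, scp it to hostgator.
# The script path is hypothetical.
30 3 * * * /home/me/bin/koha-nightly-release.sh >/dev/null 2>&1
```

Keeping the real work in a script rather than the crontab line makes it easy to run by hand when an upgrade needs checking.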
UPDATE
Getting YAZ to compile on hostgator so I can install the ZOOM module is proving to be a stumbling block.
UPDATE 2
OK, hostgator won't install YAZ, and without a C compiler it's a bit tricky to do a local install; I'll have to try a different tack.
First day at my new job
Today was the first day at my new job, which was exciting and interesting. I got my phone, email and computer set up, and made a Debian package the sysadmins can install on machines I need access to. All in all, a great first day.
Lightning talk on Koha
Next month's Perl Mongers meeting is going to be lightning talks. I'm thinking of doing one on Koha; it has to be a maximum of 5 minutes long, so it'll be fun to squeeze it into that. If I do one, I'll post the text up here.