Using a real log file to load test a server

published Sep 23, 2012 11:10 by admin (last modified Apr 15, 2014 10:55)

 

Summary: These Python scripts take a log file in the standard Apache combined format and use it to load test a server of your choice (such as a development server). You can speed up or slow down the simulation. The scripts print a report on how long each request took, which you can use to compare the performance of different server configurations. Tested on Ubuntu 12.04 with pycurl installed.
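
For reference, a line in the Apache combined log format looks roughly like this (the values below are made up):

192.0.2.1 - - [23/Sep/2012:11:10:02 +0200] "GET /front-page HTTP/1.1" 200 5123 "http://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64)"

Of all the fields, the scripts below only really need the timestamp and the request path.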

Note: Read the comments below this post for known bugs in this code; I have unfortunately not had the time to test the fixes.

Update: There are at least two other tools that can use a log file to simulate traffic, httperf and siege. However, as far as I can see they do not follow the timing of the log file, which playlogfiletoserver does.

Check the playlogfiletoserver github repository for the newest versions of the files and for installation and usage instructions. And do send me a pull request if you fix any outstanding bugs!

 

Last week I had to look into a problem with a Plone production server that had performed unsatisfactorily a couple of times during the week under heavy load, with lagging response times. Now, how to troubleshoot the problem? I had a copy of the server that looked fine when clicking around in it, or when using the Apache benchmarking tool. The solution is of course to hit the testing server with lots of requests and see what actually happens.

However, since the slowness only happened under certain load patterns, wouldn't it make sense to just use the log files from the times when the problems manifested themselves? I googled a bit and found repache, which is supposed to replay Apache log files to a server. It is written in C++ and does not compile on newer Ubuntus unless you add a couple of includes. Once that was done, it promptly crashed on my access_log file.

So, time to whip something together (note that the playlogfiletoserver github repository may have newer versions of the scripts than those posted below). First, a preprocessor that takes a log file in the Apache combined log file format and outputs a Python module containing a tuple of tuples with info on when to fire each request and what request it should be:

 

import datetime

def get_timestamp(date):
    # Parse the timestamp part of a combined log entry, e.g. 23/Sep/2012:11:10:02
    timestamp = datetime.datetime.strptime(date, '%d/%b/%Y:%H:%M:%S')
    return timestamp

def parseline(line):
    # Split a combined log format line on double quotes and spaces
    linedict = {}
    # Escaped quotes inside quoted fields would otherwise break the split on '"'
    line = line.replace('\\"', '%22')
    quoteprep = line.split('"')
    try:
        (linedict['ipnumber'], tmp, tmp, linedict['date'], tmp, tmp) = quoteprep[0].split(" ")
        (linedict['method'], linedict['path'], linedict['protocol']) = quoteprep[1].split(" ")
        (tmp, linedict['returncode'], linedict['bytestransferred'], tmp) = quoteprep[2].split(" ")
        linedict['url'] = quoteprep[3]
        linedict['date'] = linedict['date'][1:]  # strip the leading '[' from the date field
    except ValueError:
        print line
        raise
    return linedict

fi = open('access_log', 'r')
fo = open('generatedrequests.py', 'w')
fo.write('requests = (')

first_timestamp = 0
for line in fi:
    linedict = parseline(line)
    if not first_timestamp:
        first_timestamp = get_timestamp(linedict['date'])
        time_offset = 0
    else:
        # total_seconds() rather than .seconds, so that logs spanning more than a day work
        time_offset = int((get_timestamp(linedict['date']) - first_timestamp).total_seconds())
    lineout = '(%s,"%s"),\n' % (str(time_offset), linedict['path'])
    fo.write(lineout)

fo.write(')')
fo.close()
fi.close()
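
For the example log line above, followed by a second hypothetical request for /front-page/logo.png five seconds later, the generated generatedrequests.py would contain something like this, where each entry holds the offset in seconds from the first request and the path to fetch:

requests = ((0,"/front-page"),
(5,"/front-page/logo.png"),
)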

Secondly, some kind of user agent is needed to do the actual requesting, and it needs to be able to run requests in parallel; otherwise we would not be load testing the server much, just kindly waiting for it to duly process each request in sequence. I found Pete Warden's pyparallelcurl, which uses pycurl/libcurl to do its magic (pyparallelcurl is also included in the playlogfiletoserver github repository for convenience).

Thirdly and lastly, the script that does the actual load testing:

import time
from generatedrequests import requests as therequests
import pyparallelcurl
import pycurl

server = 'http://www.example.com'
speedup = 4
maxparallelrequests = 100

# How long to sleep between checks for requests that are due to be fired
sleepduration = 0.5/speedup
starttime = time.time()


curloptions = {
    pycurl.SSL_VERIFYPEER: False,
    pycurl.SSL_VERIFYHOST: False,
    pycurl.USERAGENT: 'Play log file to server user agent',
    pycurl.FOLLOWLOCATION: True
}

#c.setopt(c.MAX_RECV_SPEED_LARGE, 10000)

parallel_curl = pyparallelcurl.ParallelCurl(maxparallelrequests, curloptions)

def go(request):
    # Fire off the request; pyparallelcurl calls on_request_done when it completes
    parallel_curl.startrequest(server + request[1], on_request_done)
    return


def on_request_done(content, url, ch, search):
    # Report how long the request took and whether it succeeded
    if content is None:
        print "Fetch error for "+url
        return

    httpcode = ch.getinfo(pycurl.HTTP_CODE)
    transfertime = ch.getinfo(pycurl.TOTAL_TIME)
    if httpcode != 200:
        print "%0.3f seconds fetch error %s for %s" % (transfertime, str(httpcode), url)
        return
    else:
        print "%0.3f seconds fetch worked %s for %s" % (transfertime, str(httpcode), url)
    print "********"


def timetogo(request):
    # A request is due once its offset from the start of the log has been
    # reached in (speedup-scaled) wall clock time
    reqtime = request[0]
    now = time.time()
    return reqtime <= (now - starttime) * speedup


for request in therequests:
    while True:
        if timetogo(request):
            go(request)
            break
        else:
            time.sleep(sleepduration)

# Note: when the loop ends there may still be requests in flight. If your copy of
# pyparallelcurl has a finishallrequests() method (mirroring finishAllRequests() in
# the PHP original), calling it here waits for them before the script exits.
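
Since each completed request is printed on a line that starts with the transfer time, it is easy to boil a run down to a few summary numbers when comparing server configurations. Below is a minimal sketch of such an aggregation; it assumes the output of a run has been redirected to a file called report.txt, and both the file name and the script itself are mine rather than part of playlogfiletoserver:

times = []
failures = 0
for line in open('report.txt'):
    # Completed requests are printed as e.g. "0.123 seconds fetch worked 200 for http://..."
    parts = line.split()
    if len(parts) >= 4 and parts[1] == 'seconds':
        if parts[3] == 'worked':
            times.append(float(parts[0]))
        else:
            failures += 1

times.sort()
if times:
    print "requests: %d, failures: %d" % (len(times), failures)
    print "mean: %0.3f seconds" % (sum(times) / len(times))
    print "median: %0.3f seconds" % times[len(times) // 2]
    print "95th percentile: %0.3f seconds" % times[int(len(times) * 0.95)]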
    
    

There are probably some faulty assumptions hiding in these few lines of code; feel free to point them out in the comments below, open an issue in the issue tracker, or fork the code on github.