Monday, July 26, 2010

Got back from holidays, and after copying my photos to my Windows machine I see (again) that the default Windows picture viewer doesn't respect the orientation flag in the EXIF tags.

I fired up mogrify from ImageMagick to rotate the pixels in the photos according to the EXIF orientation, so that stupid Windows can show them correctly:

> mogrify -auto-orient *.JPG

So far so good, but then I suddenly thought: is ImageMagick rotating JPEG photos losslessly? For the uninitiated, JPEG compresses your photos very well, but at the expense of not storing the exact color of each pixel. If you read the photo and save it again, you're probably doing it with slightly different settings, so the program creates a new JPEG with even more incorrect pixels. Even worse, it tries to faithfully reproduce the bad pixels the first encoding created, so the result is worse than you would expect. After a few transformations, what used to be a "slight, invisible difference" between the original and the first JPEG becomes a big drop in quality in the last JPEG. Most people regularly rotate their photos several times, resize them, and crop them. If you save and reload the photo after each step, you keep degrading it.
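To make the generation loss concrete, here is a rough sketch of mine (not from the original post; it assumes Pillow and NumPy are installed, and photo.jpg is a hypothetical file name) that re-encodes a JPEG ten times and prints how far the pixels drift from the original:

from PIL import Image
import numpy as np

# Keep the very first decode around as the reference
original = np.asarray(Image.open('photo.jpg').convert('RGB'), dtype=np.int16)

img = Image.open('photo.jpg').convert('RGB')
for generation in range(1, 11):
    img.save('copy.jpg', quality=85)             # one more lossy encode
    img = Image.open('copy.jpg').convert('RGB')  # ...and decode
    error = np.abs(np.asarray(img, dtype=np.int16) - original).mean()
    print('generation %2d: mean error %.3f' % (generation, error))

The mean error typically creeps up over the first few generations, which is exactly the degradation described above.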

I did a quick test to check whether ImageMagick rotates the photos losslessly: I put the rotated photo over the original in GIMP and chose the difference mode. If everything is pitch black, then the rotation is lossless. I got something that looked black, but the histogram showed that not everything was completely black. I then changed solid black to solid white, and you can see the result here:


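(The same difference check can be done without GIMP. This is a rough sketch of mine, assuming Pillow and NumPy and using hypothetical file names; for a truly lossless rotation the maximum per-channel difference would be exactly 0.)

from PIL import Image
import numpy as np

a = np.asarray(Image.open('original.jpg').convert('RGB'), dtype=np.int16)
b = np.asarray(Image.open('rotated.jpg').convert('RGB'), dtype=np.int16)

if a.shape != b.shape:
    raise SystemExit('different dimensions: rotate one image back before comparing')

diff = np.abs(a - b)
print('max channel difference:', int(diff.max()))
print('pixels that changed   :', int((diff > 0).any(axis=2).sum()))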
Arggh, so I lost a bit of quality on all my photos from my last trip to Paris. I then tried to make something constructive out of this, looked up the issue, and found that the ImageMagick guys recommend using jhead & jpegtran to do lossless rotations, crops, and resizes on JPEG images. These tools don't read the pixels and encode them again; they know how JPEG works and are thus able to do these operations losslessly, as long as you don't modify any of the 8x8-pixel JPEG blocks in the image.
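One practical consequence of that block structure, as I understand it, is that a rotation can only be completely lossless when both dimensions are a multiple of the block size; otherwise the partial blocks along one edge cannot be transformed exactly. A tiny sketch (assuming Pillow; the file name just matches the photo used below) to check that condition:

from PIL import Image

BLOCK = 8  # JPEG works on 8x8 blocks; with chroma subsampling the unit can grow to 16x16
width, height = Image.open('IMG_3778.JPG').size
print('width  is a multiple of the block size:', width % BLOCK == 0)
print('height is a multiple of the block size:', height % BLOCK == 0)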

So I tried it, and this time I tested it before applying it to my new photos from Switzerland:

> jhead -autorot IMG_3778.JPG

And checked the difference again in GIMP. This time I got a much better histogram, but still not pitch black. Again, changing solid black to white showed some noise, more or less uniformly distributed over the image. Zooming in to 1:1 I got this (a 200x200 crop of the image):


This time I don't understand why there is any difference at all. Does anybody have a clue?

P.S.: To my girlfriend, that's why I'm so slow copying & selecting the photos from our trips...

Thursday, July 01, 2010

Performance bug in AppStats

I was trying to upload a CSV file with 100 entries to my local dev_appserver, but it was way too slow (30s). I first thought it was due to the file-based datastore implementation, and tried using the SQLite one instead, but it still took 30s to complete.

After a few hours of putting time.clock() calls all over the place, it turned out the problem was in AppStats. When AppStats collects all the info about a request, it stores the trace of every App Engine API call in memcache. My code looks like this:

@local_or_admin_required
def post(self):
    # csv_data comes from FieldStorage, and these fields are not converted to
    # Unicode by AppEngine / WebOb. So csv_data is a utf-8 bytestring.
    csv_data = self.request.get('file_to_convert')
    read_pos = csv.DictReader(csv_data.split('\n'))
    generated_pos = [self.convert_press_office(po) for po in read_pos]
    po_with_errors = []

    if self.request.get('import_file_after_conversion') == 'true':
        # Write the result to the datastore
        for po in generated_pos:
            name = po.name
            del po.name

            try:
                update.add_press_office(name, **po)
            except update.DuplicatedPressOfficeName:
                ...

add_press_office makes 3 API calls, so in total I'm making 300 API calls. For each call AppStats records the entire stack, with local variables. This includes csv_data and generated_pos, which are 50K and around 100K respectively. In total that's more than ~45M (300 traces × ~150K).

The 300 stack traces with pointers to the local variables (csv_data, generated_pos, ...) share the 50K and 100K of those variables, but as soon as you serialize them into a protocol buffer, this information is copied for each stack. And this happens in appstats/recording.py@356, in the function Recorder::get_both_protos_encoded. This function first encodes the full protocol buffer, wasting 30s doing it. Then it discovers that "oh no! this thing is too big to keep in memcache!" and deletes the local variables from the stack traces. But at that point it's too late and you have already paid the 30s.
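In other words, the expensive work happens before the size check. The toy sketch below is my own illustration of that ordering problem (hypothetical names, and pickle instead of the real protocol buffers; it is not the actual AppStats code):

import pickle

MEMCACHE_LIMIT = 1000000  # roughly the size limit of a single memcache value

def encode_then_truncate(traces):
    # What the post describes: encode everything first (slow, because the shared
    # locals get copied into every trace), only then notice it is too big.
    blob = pickle.dumps(traces)
    if len(blob) > MEMCACHE_LIMIT:
        for trace in traces:
            trace['locals'] = None
        blob = pickle.dumps(traces)  # encode again; the time spent above is already lost
    return blob

def truncate_then_encode(traces):
    # The effect of appstats_MAX_LOCALS = 0: never record the locals, so the
    # expensive encode of all that repeated data never happens in the first place.
    for trace in traces:
        trace['locals'] = None
    return pickle.dumps(traces)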

If you have this problem, add "appstats_MAX_LOCALS = 0" to your appengine_config.py; you will lose the values of local variables in AppStats, but it will be much faster.
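For reference, the workaround is literally one line in appengine_config.py (the file in the application root that App Engine reads for this kind of configuration):

# appengine_config.py
# Workaround from this post: don't record local variables in the AppStats stack
# traces, so the recorded protocol buffer stays small and fast to encode.
appstats_MAX_LOCALS = 0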


Tuesday, May 04, 2010

Barcamp Málaga 2010

In the end I let myself be talked into it, and together with Jose I'm going to organize a Barcamp in Málaga. We know for sure there will be good weather and bars to go out to after the barcamp; what happens at the barcamp itself is less certain, but it looks like several startup folks will attend, and we will all share our little secrets (those of us who still have any).

If you want to know what companies that started with next to nothing in their pockets, but plenty of enthusiasm and hard work, have managed to do, then you are welcome; for more details, check the Barcamp Málaga website.