Simple utilities for image resizing and uploading and stuff like that.

Overview

The imagehelper package offers a simple interface for resizing, optimizing, and uploading image assets. Core image resizing is handled by the Pillow (PIL) package; S3 uploading is handled by boto3; and there are hooks for optimizing images with the command-line tools advpng, gifsicle, jpegtran, jpegoptim, optipng and pngcrush.

This library does not perform the low-level resizing itself; it is used to define "recipes" for resizing images with Pillow and uploading them to S3 with boto3.

About

imagehelper is a fork of internal image-handling routines built for FindMeOn.com around 2005. It has been actively maintained as an open source project since at least 2012.

imagehelper allows you to define a schema for resizing images as a simple dict, and will then easily resize them.

imagehelper also supports uploading the resized images - and an archival version - onto Amazon's S3 service.

imagehelper requires Pillow. Earlier versions relied on PIL or supported both. This really is an old package!

This package will try to import boto3 for communicating with Amazon S3. If you don't want to use S3, no worries - it is entirely optional and you can save to a local file instead.

The package was originally aimed at thumbnails, but it works for all resizing needs that are aimed at downsampling images.

If you have optimization applications like gifsicle, pngcrush and jpegtran installed in your environment, you can 'optimize' the output (and archive) files.

This is a barebones package with NO FRAMEWORK DEPENDENCIES - which is a good thing. You define image transformation recipes using a simple dict, and the package does the rest.

This package also tries to avoid writing to disk whenever possible: (spooled) tempfiles are avoided unless an external program is called, and everything else is piped through file-like in-memory objects.

I could only find a single tool for resizing thumbnails on PyPI that did not require a framework, and that's really annoying.

The package is a bit awkward to use for a single task, but it was designed for repetitive tasks - as in a web application.

A typical usage is illustrated in the sections below. Also check the demo.py file to see how flexible this can be.

This package has been used in production for over a decade.

It supports Python 2.7 and Python 3. A lot of things could be done better and should be done better, but this works and is relatively fast.

Why ?

Imagine that you have a site that allows user-generated uploads, or you want to make video stills...

You can create a schema of image sizes...

IMAGE_SIZES = {
    'thumb': {
        'width': 32,
        'height': 32,
        'save_quality': 50,
        'suffix': 't1',
        'format': 'JPEG',
        'constraint-method': 'fit-within',
        'filename_template': '%(guid)s-120x120.%(format)s',
    },
    'og:image': {
        'width': 200,
        'height': 200,
        'save_quality': 50,
        'suffix': 'og',
        'format': 'JPEG',
        'constraint-method': 'ensure-minimum',
        'filename_template': '%(guid)s-og.%(format)s',
    },
}

And easily upload them:

# create some configs in your app

# config object for IMAGE_SIZES
resizerConfig = imagehelper.resizer.ResizerConfig(
    resizesSchema=IMAGE_SIZES,
    optimize_original=True,
    optimize_resized=True,
)

# config object for S3
saverConfig = imagehelper.saver.s3.SaverConfig(
    key_public = AWS_KEY_PUBLIC,
    key_private = AWS_KEY_SECRET,
    bucket_public_name = AWS_BUCKET_PUBLIC,
    bucket_archive_name = AWS_BUCKET_ARCHIVE,
)

# create some factories.
# factories are unnecessary. they just generate the workhorse objects for you
# they're very useful for cutting down code
# build one, then stash in your app

USE_FACTORY = True
if USE_FACTORY:
    rFactory = imagehelper.resizer.ResizerFactory(resizerConfig=resizerConfig)
    # saverLogger is an (optional) instance of a SaverLogger subclass; see below
    s3Factory = imagehelper.saver.s3.s3ManagerFactory(
        saverConfig=saverConfig, resizerConfig=resizerConfig, saverLogger=saverLogger
    )

    resizer = rFactory.resizer()
    s3Manager = s3Factory.manager()

else:
    resizer = imagehelper.resizer.Resizer(resizerConfig=resizerConfig)
    s3Manager = imagehelper.saver.s3.s3Manager(saverConfig=saverConfig, resizerConfig=resizerConfig, saverLogger=saverLogger)

# resize !
resizedImages = resizer.resize(imagefile=get_imagefile())

# upload the resized items
uploaded_files = s3Manager.files_save(resizedImages, guid="123")

# want to delete them?
deleted = s3Manager.files_delete(uploaded_files)

Behind the scenes, imagehelper does all the math and uploading.

Resizing Options

  • fit-within

Resizes item to fit within the bounding box, on both height and width. The resulting image will be the size of the bounding box or smaller.

  • fit-within:crop-to

Resizes the item along whichever axis ensures the bounding box is 100% full, then crops. The resulting image will be the size of the bounding box.

  • fit-within:ensure-width

Resizes item to fit within the bounding box, scaling height to ensure 100% width. The resulting image will be the size of the bounding box.

  • fit-within:ensure-height

Resizes item to fit within the bounding box, scaling width to ensure 100% height. The resulting image will be the size of the bounding box.

  • smallest:ensure-minimum

Resizes the item to cover the bounding box on both axes. One dimension may be larger than the bounding box.

  • exact:no-resize

Do not scale! Raises an exception if any scaling would be required. This is a convenience for just saving/re-encoding files. For example, a 100x100 recipe must receive an image that is exactly 100x100.

  • exact:proportion

Attempt to scale the image to an exact size, and raise an exception if that isn't possible. Usually this is used to resample a 1:1 image, but it can also be used to drop an image to a specific proportion, e.g. 300x400 can scale to 30x40 or 300x400, but not to 30x50.
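
As a rough illustration of the math behind the first two constraint methods, here is a standalone sketch (assuming smaller images are not upscaled); imagehelper's real logic lives in imagehelper.resizer:

def fit_within(src_w, src_h, box_w, box_h):
    # 'fit-within': shrink until both dimensions fit inside the bounding box
    scale = min(float(box_w) / src_w, float(box_h) / src_h, 1.0)
    return int(src_w * scale), int(src_h * scale)

def fit_within_crop_to(src_w, src_h, box_w, box_h):
    # 'fit-within:crop-to': scale along whichever axis fills the box, then crop the overflow
    scale = max(float(box_w) / src_w, float(box_h) / src_h)
    scaled_w, scaled_h = int(src_w * scale), int(src_h * scale)
    crop_w, crop_h = scaled_w - box_w, scaled_h - box_h  # pixels trimmed away
    return (box_w, box_h), (crop_w, crop_h)

print(fit_within(640, 480, 120, 120))          # (120, 90)
print(fit_within_crop_to(640, 480, 120, 120))  # ((120, 120), (40, 0))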

Usage...

Check out the demo.py module - it offers a narrative demo of how to use the package. Be sure to include some Amazon S3 credentials in your environment.

imagehelper is NOT designed for one-off resizing needs. It's designed for use in applications where you're repeatedly doing the same resizing.

The general program flow is this:

  1. Create Configuration objects to hold instructions
  2. Create Factory objects to hold the Configuration objects.
  3. Obtain a Worker object from the Factory to do the actual work (resizing or uploading)

You should typically create "Configuration" and "Factory" objects during application startup, and create/destroy a worker for each request or event.

Here's a more in-depth description:

  1. Create a dict of "photo resizes" describing your schema.
  • keys prefixed with save_ are passed on to PIL during the call to save (the prefix is removed)
  • you can decide what type of resizing you want. Sometimes you want to crop, other times you want to fit within a box, and other times you want to ensure a height or width. This makes your designers happy.
  2. Create an array of image_resizes_selected -- the keys in the above schema you want to resize.

  3. You can pass these arguments into the routines themselves, or generate an imagehelper.resizer.ResizerConfig object or an imagehelper.resizer.ResizerFactory that you stash into your application.

  4. If you're saving to Amazon S3, create an imagehelper.saver.s3.SaverConfig object to store your info. Note that you can specify a public and a private bucket.

    • resized thumbnails are saved to the public bucket
    • the original item is optionally saved to the archive, which is not viewable to the public. This is so you can apply different sizing schemes in the future.
  5. You can define your own Amazon S3 logger, a class that provides two methods:

    class SaverLogger(object):
        def log_save(self, bucket_name=None, key=None, file_size=None, file_md5=None):
            pass
        def log_delete(self, bucket_name=None, key=None):
            pass

This allows you to log, on your side, what is uploaded into Amazon S3. This is hugely helpful, because S3 uploads are not transactionally tied to your application logic. There are some built-in precautions for this... but it's best to play it safe.
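
For example, a minimal subclass that records S3 activity via the standard logging module might look like this (the base class and method signatures come from imagehelper; the logger name and the rest are illustrative -- you could just as easily write to a database table):

import logging

import imagehelper.saver.s3

log = logging.getLogger("myapp.s3_activity")

class AppSaverLogger(imagehelper.saver.s3.SaverLogger):
    def log_save(self, bucket_name=None, key=None, file_size=None, file_md5=None):
        log.info("s3 save bucket=%s key=%s size=%s md5=%s",
                 bucket_name, key, file_size, file_md5)

    def log_delete(self, bucket_name=None, key=None):
        log.info("s3 delete bucket=%s key=%s", bucket_name, key)

saverLogger = AppSaverLogger()  # pass as saverLogger= when building the s3 manager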

Items are currently saved to Amazon S3 as follows:

Public:

  • Template: %(guid)s-%(suffix)s.%(format)s
  • Tokens:
    • guid: you must supply a guid for the file
    • suffix: this is set in the resize schema
    • format: this is dictated by the PIL format type

Archive:

  • Template: %(guid)s.%(format)s
  • Tokens:
    • guid: you must supply a guid for the file
    • format: this is dictated by the original format type PIL found

Here is an example photo_resize schema:

'jpeg_thumbnail-120': {
    'width': 120,
    'height': 120,
    'save_quality': 50,
    'suffix': 't120',
    'format': 'JPEG',
    'constraint-method': 'fit-within',
    's3_bucket_public': 'my-test',
    'filename_template': '%(guid)s-%(suffix)s.%(format)s',
},

This would create a file on Amazon S3 with a guid you supply, such as 123123123:

/my-test/123123123-t120.jpg
_bucket_/_guid_-_suffix_._format_

String templates may be used to affect how this is saved; read the source for more info.
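
For reference, the templates are ordinary Python %-style string formatting; the public template above expands roughly like this (illustrative values):

template = '%(guid)s-%(suffix)s.%(format)s'
key = template % {'guid': '123123123', 'suffix': 't120', 'format': 'jpg'}
# key == '123123123-t120.jpg', stored as /my-test/123123123-t120.jpg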

Transactional Support

If you upload something via imagehelper.saver.s3.S3Uploader().s3_upload(), the task is considered to be "all or nothing".

The actual uploading occurs within a try/except block, and a failure will "rollback" and delete everything that has been successfully uploaded.

If you want to integrate with something like the Zope transaction package, imagehelper.saver.s3.S3Uploader().files_delete() is a public function that expects as input the output of the s3_upload function -- a dict keyed by resize name (from the schema), whose values are (filename, bucket) tuples.
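
A hedged sketch of what that integration might look like using the transaction package's after-commit hooks (s3Manager and resizedImages are the objects from the demo above; the hook wiring is illustrative glue, not part of imagehelper):

import transaction

def _abort_cleanup(status, s3Manager, uploaded_files):
    # `status` is True only if the surrounding transaction committed
    if not status:
        s3Manager.files_delete(uploaded_files)

uploaded_files = s3Manager.files_save(resizedImages, guid="123")
transaction.get().addAfterCommitHook(_abort_cleanup, args=(s3Manager, uploaded_files))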

You can also define a custom subclass of imagehelper.saver.s3.SaverLogger that supports the following methods:

  • log_save(self, bucket_name=None, key=None, file_size=None, file_md5=None)
  • log_delete(self, bucket_name=None, key=None)

Every successful 'action' is sent to the logger. A valid transaction to upload 5 sizes will have 5 calls to log_save, an invalid transaction will have a log_delete call for every successful upload.

This was designed for a variety of use cases:

  • log activity to a file or a non-transactional database connection; this gives you efficient bookkeeping of S3 activity and lets you audit those files to ensure there is no orphaned data in your S3 buckets
  • log activity to StatsD or another metrics app to show how much activity goes on

FAQ - package components

  • errors - custom exceptions
  • image_wrapper - actual image reading/writing, resize operations
  • resizer - manage resizing operations
  • s3 - manage s3 communication
  • utils - miscellaneous utility functions

FAQ - deleting existing files ?

If you don't have a current mapping of the files to delete in S3, but you do have the archive filename and a guid, you can easily generate what they would be based on a resizerConfig/schema and the archived filename.

## fake the sizes that would be generated off a resize
resizer = imagehelper.resizer.Resizer(
    resizerConfig=resizerConfig,
    optimize_original=True,
    optimize_resized=True,
)
fakedResizedImages = resizer.fake_resultset(
    original_filename=archive_filename
)

## generate the filenames
deleter = imagehelper.saver.s3.SaverManager(
    saverConfig=saverConfig, resizerConfig=resizerConfig
)
targetFilenames = deleter.generate_filenames(fakedResizedImages, guid)

The original_filename is needed by fake_resultset, because a resultset tracks the original file and its type. As of the 0.1.0 branch, only the extension of the filename is utilized.

FAQ - validate uploaded image ?

This is simple.

  1. Create a dumb resizer factory

    nullResizerFactory = imagehelper.resizer.ResizerFactory()

  2. Validate it

    try:
        resizer = nullResizerFactory.resizer(
            imagefile = uploaded_image_file,
        )
    except imagehelper.errors.ImageError_Parsing as exc:
        raise ValueError('Invalid Filetype')

  3. Grab the original file for advanced ops

    resizerImage = resizer.get_original()
    if resizerImage.file_size > MAX_FILESIZE_PHOTO_UPLOAD:
        raise ValueError('Too Big!')

Passing an imagefile to ResizerFactory.resizer or Resizer.__init__ will register the file with the resizer. This action creates an image_wrapper.ImageWrapper object from the file, which contains the original file and a PIL/Pillow object. If PIL/Pillow can not read the file, an error will be raised.

FAQ - what sort of file types are supported ?

All the reading and resizing of image formats happens in PIL/Pillow.

imagehelper tries to support the most common file objects.

imagehelper.image_wrapper.ImageWrapper, our core class for reading files, supports reading the following file types:

  • file (the native Python file object, i.e. types.FileType)
  • cgi.FieldStorage
  • StringIO.StringIO, cStringIO.InputType, cStringIO.OutputType

We try to "be kind and rewind" and call seek(0) on the underlying file when appropriate - but sometimes we forget.

The resize operations accept the following file kwargs:

  • imagefile -- one of the above file objects
  • imageWrapper -- an instance of imagehelper.image_wrapper.ImageWrapper
  • file_b64 -- a base64 encoded file datastream. This will be decoded into a cStringIO object for operations.

FAQ - using celery ?

Celery message brokers require serialized data.

In order to pass the task to celery, you will need to serialize/deserialize the data. imagehelper provides convenience functionality for this:

nullResizerFactory = imagehelper.resizer.ResizerFactory()
resizer = nullResizerFactory.resizer(
    imagefile = uploaded_file,
)

# grab the original file for advanced ops
resizerImage = resizer.get_original()

# serialize the image
instructions = {
    'image_md5': resizerImage.file_md5,
    'image_b64': resizerImage.file_b64,
    'image_format': resizerImage.format,
}

# send to celery
deferred_task = celery_tasks.do_something.apply_async((id, instructions,))


# in celery...
@task
def do_something(id, instructions):
    ## resize the images
    ## (resizerFactory here is a ResizerFactory configured with your schema,
    ##  stashed in the worker process)
    resizer = resizerFactory.resizer(
        file_b64 = instructions['image_b64'],
    )
    resizedImages = resizer.resize()
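    # ...then continue exactly as in the non-celery flow shown earlier
    # (sketch: assumes an s3Manager was configured and stashed in the worker process)
    uploaded_files = s3Manager.files_save(resizedImages, guid=id)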

How are optimizations handled?

Image optimizations are handled by piping the image through external programs. The idea (and settings) were borrowed from the Mac app ImageOptim (https://github.com/pornel/ImageOptim or https://imageoptim.com/).

The default image optimizations are LOSSLESS.

Fine-grained control of image optimization strategies is handled at the package level. In the future this could be handled within configuration objects. This strategy was chosen for two reasons:

  1. The config objects were getting complex
  2. Choosing an image optimization level is more of a "machine" concern than a "program" concern.

Not everyone has every program installed on their machines.

imagehelper will attempt to autodetect what is available on the first invocation of .optimize

If you are on a forking server, you can do this before the fork and save yourself a few CPU cycles. yay.

import imagehelper
imagehelper.image_wrapper.autodetect_support()

The autodetect_support routine will set

imagehelper.image_wrapper[ program ]['available']

If you want to enable/disable them manually, just edit

imagehelper.image_wrapper[ program ]['use']

You can also set a custom binary

imagehelper.image_wrapper[ program ]['binary']

Autodetection is handled by invoking each program's help command to see if it is installed.
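
A rough sketch of what that check boils down to (illustrative only; the real detection logic and flag names live in imagehelper.image_wrapper):

import os
import subprocess

def program_available(binary):
    # a program counts as "available" if its help command can be invoked at all;
    # the exit code doesn't matter, only whether the binary exists on the PATH
    try:
        with open(os.devnull, "w") as devnull:
            subprocess.call([binary, "--help"], stdout=devnull, stderr=devnull)
        return True
    except OSError:
        return False

print(program_available("gifsicle"))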

JPEG

JPEGs are optimized in a two-stage process.

jpegtran is used to do an initial optimization and to ensure a progressive JPEG;
all JPEG markers are preserved.
jpegoptim is then run on the output of the above; in this stage all JPEG markers
are removed.

The exact arguments are:

"""jpegtran -copy all -optimize -progressive -outfile %s %s""" % (fileOutput.name, fileInput.name)
"""jpegoptim --strip-all -q %s""" % (fileOutput.name, )

GIF

Gifsicle is given the following params: -O3 --no-comments --no-names --same-delay --same-loopcount --no-warnings

The -O3 level can be changed by setting the package-level variable to a new integer (1-3):

imagehelper.image_wrapper.OPTIMIZE_GIFSICLE_LEVEL = 3

PNG

The package will try to use multiple PNG optimizers in sequence.

You can disable any PNG optimizer by setting the corresponding package-level variable to False:

OPTIMIZE_PNGCRUSH_USE = True
OPTIMIZE_OPTIPNG_USE = True
OPTIMIZE_ADVPNG_USE = True

pngcrush

pngcrush -rem alla -nofilecheck -bail -blacken -reduce -cc

optipng

optipng -i0 -o3

The optipng level can be changed by setting the package-level variable to a new integer (optipng itself supports levels 0-7):

OPTIMIZE_OPTIPNG_LEVEL = 3  # 6 would be best

advpng

advpng -4 -z

The advpng level can be changed by setting the package-level variable to a new integer (1-4):

OPTIMIZE_ADVPNG_LEVEL = 4  # 4 is max

What external programs need to be installed?

None. These are all optional! But here you go:

ubuntu

apt-get install advancecomp  # advpng
apt-get install gifsicle
apt-get install libjpeg-turbo-progs  # jpegtran
apt-get install jpegoptim
apt-get install optipng
apt-get install pngcrush

ToDo

See TODO.txt

License

The code is licensed under the BSD license.

The sample image is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0) http://creativecommons.org/licenses/by-nc-nd/3.0/
