Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.
Theano is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It is built on top of NumPy. Theano features:
- tight integration with NumPy: an interface similar to NumPy's; numpy.ndarray is also used internally in Theano-compiled functions.
- transparent use of a GPU: perform data-intensive computations up to 140x faster than on a CPU (support for float32 only).
- efficient symbolic differentiation: Theano can compute derivatives for functions of one or many inputs.
- speed and stability optimizations: avoid nasty bugs when computing expressions such as log(1 + exp(x)) for large values of x.
- dynamic C code generation: evaluate expressions faster.
- extensive unit-testing and self-verification: includes tools for detecting and diagnosing bugs and/or potential problems.
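The stability point above can be illustrated outside Theano. This is a minimal NumPy sketch (the function names are illustrative, not part of Theano's API) of why evaluating log(1 + exp(x)) directly overflows for large x, while an equivalent rewrite, of the kind Theano's optimizer substitutes automatically, stays finite:

```python
import numpy as np

def naive_softplus(x):
    # Direct evaluation: exp(x) overflows to inf for large x,
    # so log(1 + exp(x)) becomes inf even though the true value is finite.
    with np.errstate(over="ignore"):
        return np.log(1.0 + np.exp(x))

def stable_softplus(x):
    # Equivalent rewrite: log(1 + exp(x)) == logaddexp(0, x),
    # which NumPy evaluates without forming exp(x) explicitly.
    return np.logaddexp(0.0, x)

print(naive_softplus(1000.0))   # inf -- overflowed
print(stable_softplus(1000.0))  # 1000.0 -- correct
```

Theano applies this kind of rewrite on the compiled graph, so user code can keep the readable log(1 + exp(x)) form.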
Theano has been powering large-scale computationally intensive scientific research since 2007, but it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).
Theano 0.3.1 (2011-02-21)
- The value attribute of Theano shared variables is deprecated; use get_value() or set_value() instead!
- See http://deeplearning.net/software/theano/tutorial/aliasing.html
- Bugs fixed:
- The random number generator in theano/sandbox/rng_mrg.py did not always return the same sequence of numbers on the CPU and GPU.
- In some cases, there was a (possibly large) fraction of non-random garbage in the returned sequence.
- In Python mode (not the default mode), an elemwise operation with an empty ndarray as input did not return an empty ndarray.
- Scan cached the number of steps. On its own this caused no problem, because each call to Scan refreshed the cached value. The problem was that ScanGrad used the cached number of steps without refreshing it. To be affected by this bug, one had to compile two graphs, one containing a Scan and the other the corresponding ScanGrad, call the first function to cache the number of steps, and then call the second function with a different number of steps.
- In GpuConv, errors in conv_patch_stack_reduce when the entire kernel does not fit into shared memory. The error was not found earlier because its impact was less than the relative tolerance of 1e-3. The relative tolerance is now 1e-5.
- Crashes fixed:
- Taking the gradient of DimShuffle no longer raises an exception that crashed Theano in a particular case.
- Compilation crash for GpuElemwise with tensors of a high number of dimensions (~6 or more).
- Disabled the C code generator that made gcc crash on complex types.
- Crash in optimization when an Op has no input.
- Output shape is now computed correctly for matrix-vector multiplication on GPU.
- In Scan, when numbers were used as inputs instead of symbolic variables.
- In GradScan, when the Scan has only one input.
- In GpuSum, a bug in the calculation of n_blocks for the 10 pattern (sum over the rows of a matrix).
- Some segfaults at exit with GPU code.
- New SpecifyShape op that allows passing more shape information in the graph.
- Speed up gemv by working around scipy's gemv slowness when the matrix is in C order (the default).
- Remove join of only 1 element.
- During optimization, consider one more case in get_constant_value.
- cuda_shared.value = X now works inplace!
- cuda_shared_var.set_value(new_ndarray) will overwrite the old value inplace in the most common case.
- Allow creating a CudaNdarraySharedVariable from a CudaNdarray.
- New init_gpu_device Theano flag.
- Fuse GpuElemwise more often (in the case where there are so many inputs that fusing them all would exceed the 256-byte limit on parameters to a GPU function).
- A CPU join of only one element is now moved to the GPU.
- New features:
- tensor.reshape now makes dimensions of length 1 broadcastable.
- tensor.prod now implements the gradient.
- DebugMode now warns if an Op declared itself as returning a view of the input but did not do so.
- This behaviour is a problem because it can block other Ops from operating inplace on the same inputs, which lowers memory reuse.
- Sparse.structured_dot now works when both matrices are sparse.
- Sparse types are now supported by the shape op, and the ShapeFeature optimizer works correctly with them.
- New 3D convolution ops, with CPU and GPU implementations.
- New colors in pydotprint.
- Documented lib.amdlibm and (new) init_gpu_device config variables.
- A new page on the memory-aliasing contract of Theano (written for 0.3, but an error hid it on the web page).
- Revision to the Windows installation instructions.
- The cuda documentation is now generated on the web server.
- Better documentation of .theanorc and its sections.
- Unit tests:
- Stop usage of deprecated functions or syntax in the unit tests.
- Better testing of GPU convolution nets.
- Make more tests able to use different random seeds.
- Tests of sparse now use default mode, not a hard-coded one.
- Remove some tests of unimplemented features.
- The name of compiledir now includes the Python version, to make life easier for people with many Python versions.
- Added theano.tensor.std as a shortcut to sqrt(var(input=input, axis=axis)).
- Whitespace, tabulation and indentation clean-up in the code.
- Better detection of memory sharing between variables.
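The new theano.tensor.std shortcut noted above is defined as the square root of the variance. A quick NumPy check of that identity (NumPy standing in for the symbolic version here, since the semantics match):

```python
import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

# std is the square root of the (population) variance, per axis.
manual = np.sqrt(np.var(data, axis=0))
builtin = np.std(data, axis=0)

assert np.allclose(manual, builtin)
print(builtin)  # [1.5 1.5 1.5]
```

In Theano the same shortcut operates on symbolic variables, so the sqrt(var(...)) expression becomes part of the compiled graph.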
Files:
- Theano-0.3.1.tar.gz (896.8 kB), source distribution
- Theano-0.3.1.zip (1.0 MB), source distribution