http client/server for asyncio
Python >= 3.3
aiohttp is offered under the BSD license.
To retrieve something from the web:
    import aiohttp

    def get_body(url):
        response = yield from aiohttp.request('GET', url)
        return (yield from response.read_and_close())
You can use this style of request anywhere in your asyncio-powered program:
    response = yield from aiohttp.request('GET', 'http://python.org')
    body = yield from response.read_and_close()
    print(body)
The signature of request is the following:
    request(method, url, *,
            params=None,
            data=None,
            headers=None,
            cookies=None,
            files=None,
            auth=None,
            allow_redirects=True,
            max_redirects=10,
            encoding='utf-8',
            version=(1, 1),
            timeout=None,
            compress=None,
            chunked=None,
            expect100=False,
            connector=None,
            read_until_eof=True,
            loop=None)
It constructs and sends a request and returns a response object. The parameters are as follows:
method: HTTP method
url: Request url
params: (optional) Dictionary or bytes to be sent in the query string of the new request
data: (optional) Dictionary, bytes, or file-like object to send in the body of the request
headers: (optional) Dictionary of HTTP Headers to send with the request
cookies: (optional) Dict object to send with the request
files: (optional) Dictionary of ‘name’: file-like-objects for multipart encoding upload
auth: (optional) Auth tuple to enable Basic HTTP Auth
timeout: (optional) Float describing the timeout of the request
allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
compress: Boolean. Set to True if request has to be compressed with deflate encoding.
chunked: Boolean or Integer. Set to chunk size for chunked transfer encoding.
expect100: Boolean. Expect 100-continue response from server.
connector: aiohttp.connector.BaseConnector instance to support connection pooling and session cookies.
read_until_eof: Read response until eof if response does not have Content-Length header.
loop: Optional event loop.
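For illustration, the params dictionary is serialized into the URL query string, and a Basic auth tuple corresponds to a base64-encoded Authorization header. A rough, stdlib-only sketch of both encodings (this is not aiohttp's actual internal code, just what the resulting URL and header look like):

```python
import base64
from urllib.parse import urlencode

# What a params=... dictionary contributes to the request URL:
params = {'q': 'aiohttp', 'page': 2}
url = 'http://python.org/search?' + urlencode(params)
# -> http://python.org/search?q=aiohttp&page=2

# What a Basic auth=('user', 'pass') tuple becomes on the wire:
user, password = 'user', 'pass'
token = base64.b64encode('{}:{}'.format(user, password).encode('utf-8')).decode('ascii')
auth_header = 'Basic ' + token
# -> Basic dXNlcjpwYXNz

print(url)
print(auth_header)
```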
Paster configuration example:
    [server:main]
    use = egg:gunicorn#main
    host = 0.0.0.0
    port = 8080
    worker_class = aiohttp.worker.AsyncGunicornWorker
Simple HTTP proxy support.
Get rid of __del__ methods.
Use ResourceWarning instead of logging warning record.
Do not unquote client request URLs.
Allow multiple waiters on transport drain.
Do not return client connection to pool in case of exceptions.
Rename SocketConnector to TCPConnector and UnixSocketConnector to UnixConnector.
Connection flow control.
HTTP client session/connection pool refactoring.
Better handling for bad server requests.
Added client session reuse timeout.
Better client request cancellation support.
Better handling of responses without a Content-Length header.
Added HttpClient verify_ssl parameter support.
Log a missing Content-Length warning only for PUT and POST requests.
Better support for server exit.
Read response body until EOF if the Content-Length header is not defined (#14).
Fix trailing char in allowed_methods.
Start slow request timer for first request.
Added utility method HttpResponse.read_and_close().
Added slow request timeout.
Enable socket SO_KEEPALIVE if available. (@polymorphm)
Better handling for process exit.
Allow using a custom HttpRequest client class.
Use gunicorn keepalive setting for async worker.
Log leaking responses.
Python 3.4 compatibility.
Resolve only AF_INET family, because it is not clear how to pass extra info to asyncio.
Allow waiting for request completion with HttpResponse.wait_for_close().
Handle exception in client request stream.
Prevent host resolving for each client request.
Added client support for the Expect: 100-continue header.
Added custom WSGI application close procedure.
Fixed concurrent host failure in HttpClient.
Added TCP connection timeout to the HTTP client.
Better handling of client connection errors.
Gracefully handle process exit.