Python 3.2 added the concurrent.futures library:
http://www.python.org/dev/peps/pep-3148/
Anyone who has used Java's concurrency framework will smile in recognition after a look at it.
The library is quite lean, with two core classes: Executor and Future. An Executor accepts asynchronous tasks (along with their arguments) and returns a Future.
Executor
submit(fn, *args, **kwargs)
map(func, *iterables, timeout=None)
shutdown(wait=True)
Executor has two subclasses: ThreadPoolExecutor and ProcessPoolExecutor.
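A minimal sketch of how submit() and map() behave, using a trivial add function of my own for illustration:

```python
from concurrent import futures

def add(a, b):
    # a trivial task, just to demonstrate the Executor API
    return a + b

with futures.ThreadPoolExecutor(max_workers=2) as executor:
    # submit() schedules one call and immediately returns a Future
    future = executor.submit(add, 1, 2)
    print(future.result())  # blocks until the task finishes -> 3

    # map() applies the function across iterables, yielding results in order
    results = list(executor.map(add, [1, 2, 3], [10, 20, 30]))
    print(results)  # [11, 22, 33]
# leaving the with-block implicitly calls shutdown(wait=True)
```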
Future Objects
cancel()
cancelled()
running()
done()
result(timeout=None)
exception(timeout=None)
add_done_callback(fn)
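A quick sketch of the Future state methods and add_done_callback(), using a made-up slow_square task:

```python
from concurrent import futures
import time

def slow_square(x):
    # simulate a slow task so the Future stays pending for a moment
    time.sleep(0.1)
    return x * x

with futures.ThreadPoolExecutor(max_workers=1) as executor:
    f = executor.submit(slow_square, 4)
    # the callback fires once the future finishes (or is cancelled)
    f.add_done_callback(lambda fut: print('done, result =', fut.result()))
    print(f.done())       # likely False: the task is still sleeping
    print(f.result())     # blocks until complete -> 16
    print(f.done())       # True once result() has returned
    print(f.cancelled())  # False: it ran to completion
```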
Here are two examples:
Check Prime Example
from concurrent import futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

def main():
    with futures.ProcessPoolExecutor() as executor:
        for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
            print('%d is prime: %s' % (number, prime))

if __name__ == '__main__':
    main()
Web Crawl Example
from concurrent import futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

def main():
    with futures.ThreadPoolExecutor(max_workers=5) as executor:
        future_to_url = dict(
            (executor.submit(load_url, url, 60), url)
            for url in URLS)

        for future in futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                print('%r page is %d bytes' % (
                    url, len(future.result())))
            except Exception as e:
                print('%r generated an exception: %s' % (
                    url, e))

if __name__ == '__main__':
    main()