Celery 3.1 Documentation
…not a global rate limit. To enforce a global rate limit (e.g. for an API with a maximum number of requests per second), you must restrict to a given queue. … Task.time_limit: the hard time limit, in seconds … return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to cache resources, e.g. a base Task class that caches a database connection: … transactions, making it less likely to experience the problem described above. However, enabling ATOMIC_REQUESTS on the database connection will bring back the transaction-per-request model and the race condition…
887 pages | 1.22 MB | 1 year ago

Celery 3.1 Documentation
…not a global rate limit. To enforce a global rate limit (e.g. for an API with a maximum number of requests per second), you must restrict to a given queue. … Task.time_limit: the hard time limit, in seconds … return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to cache resources, e.g. a base Task class that caches a database connection: … transactions, making it less likely to experience the problem described above. However, enabling ATOMIC_REQUESTS on the database connection will bring back the transaction-per-request model and the race condition…
607 pages | 2.27 MB | 1 year ago

Celery 2.3 Documentation
…KeyError: … return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … feeds. Note that this is one of the applications evented I/O is especially good at (asynchronous HTTP requests). You may want a mix of both Eventlet and multiprocessing workers, and route tasks according to…
334 pages | 1.25 MB | 1 year ago

Celery 2.3 Documentation
…return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … feeds. Note that this is one of the applications evented I/O is especially good at (asynchronous HTTP requests). You may want a mix of both Eventlet and multiprocessing workers, and route tasks according to…
530 pages | 900.64 KB | 1 year ago

Celery 2.4 Documentation
…return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … feeds. Note that this is one of the applications evented I/O is especially good at (asynchronous HTTP requests). You may want a mix of both Eventlet and multiprocessing workers, and route tasks according to…
543 pages | 957.42 KB | 1 year ago

Celery 2.5 Documentation
…Release 2.5.5 … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … Adding/Reloading modules (new in version 2.5): the remote control command pool_restart sends restart requests to the worker's child processes. It is particularly useful for forcing the worker to import new modules…
400 pages | 1.40 MB | 1 year ago

Celery 2.5 Documentation
…return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … Adding/Reloading modules (new in version 2.5): the remote control command pool_restart sends restart requests to the worker's child processes. It is particularly useful for forcing the worker to import new modules…
647 pages | 1011.88 KB | 1 year ago

Celery 2.4 Documentation
…KeyError: … return False … and you route every request to the same process, then it will keep state between requests. This can also be useful to keep cached resources: class DatabaseTask(Task): _db = None @property … shutdown all workers: >>> broadcast("shutdown", destination="worker1.example.com") … Ping: this command requests a ping from alive workers. The workers reply with the string 'pong', and that's just about it. … feeds. Note that this is one of the applications evented I/O is especially good at (asynchronous HTTP requests). You may want a mix of both Eventlet and multiprocessing workers, and route tasks according to…
395 pages | 1.54 MB | 1 year ago

Celery v5.0.1 Documentation
…Language interoperability can also be achieved by exposing an HTTP endpoint and having a task that requests it (webhooks). … What do I need? Version requirements: Celery version 5.0 runs on Python (3.6, … adding a timeout to a web request using the requests [https://pypi.python.org/pypi/requests/] library: connect_timeout, read_timeout = 5.0, 30.0 response = requests.get(URL, timeout=(connect_timeout, read_timeout)) … overwhelming the service with your requests. Fortunately, Celery's automatic retry support makes it easy. Just specify the retry_backoff argument, like this: from requests.exceptions import RequestException…
2313 pages | 2.13 MB | 1 year ago

Celery v5.0.2 Documentation
…Language interoperability can also be achieved by exposing an HTTP endpoint and having a task that requests it (webhooks). … What do I need? Version requirements: Celery version 5.0 runs on Python (3.6, … adding a timeout to a web request using the requests [https://pypi.python.org/pypi/requests/] library: connect_timeout, read_timeout = 5.0, 30.0 response = requests.get(URL, timeout=(connect_timeout, read_timeout)) … overwhelming the service with your requests. Fortunately, Celery's automatic retry support makes it easy. Just specify the retry_backoff argument, like this: from requests.exceptions import RequestException…
2313 pages | 2.14 MB | 1 year ago
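The 5.0.x results above mention Celery's automatic retries with the retry_backoff option. As a rough sketch of the delay schedule that option produces (first retry after about 1 second, then 2, 4, 8, …, capped by retry_backoff_max, which defaults to 600 seconds, and optionally randomized by retry_jitter), here is a small illustrative helper; it is plain Python, not Celery's actual implementation, and the function name is invented for this example:

```python
import random

def backoff_delays(retries, factor=1, cap=600, jitter=False):
    """Exponential backoff schedule in the spirit of Celery's
    retry_backoff: factor * 2**n seconds for the n-th retry,
    capped (Celery caps via retry_backoff_max, default 600 s).
    Illustrative sketch only, not Celery's code."""
    delays = []
    for n in range(retries):
        delay = min(factor * (2 ** n), cap)
        if jitter:
            # Celery's retry_jitter draws a random value up to the delay
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays

print(backoff_delays(5))  # [1, 2, 4, 8, 16]
```

The timeout=(connect_timeout, read_timeout) tuple shown in the snippet is the real requests API: the first element bounds connection setup, the second bounds each read, so a hung server cannot stall the task (and the retry) indefinitely.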
51 results in total
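Several of the 2.x snippets above show the same pattern: a base Task class that caches a database connection, so that a worker process opens the connection once and reuses it between requests. A minimal stand-alone sketch of that caching pattern (plain Python, with a stub connect() in place of a real database driver so it runs anywhere; in Celery the class would subclass Task):

```python
def connect():
    """Stub standing in for an expensive database connection setup."""
    return object()

class DatabaseTask:
    """Sketch of the snippets' `class DatabaseTask(Task)` pattern:
    the connection is created on first access and then reused for
    the lifetime of the (worker) process."""
    _db = None

    @property
    def db(self):
        if self._db is None:
            self._db = connect()
        return self._db

task = DatabaseTask()
assert task.db is task.db  # connect() runs once; later accesses reuse it
```

This works in Celery because a task class is instantiated only once per worker process, so per-instance state survives across the requests that process handles.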