nameko_chassis

nameko_chassis.debug.debug_runner(runner)

Dump debug information about service state to standard output.

Call this function from within the nameko backdoor, which exposes a runner local variable.

If rich is available, the output will be way prettier than you’d expect :)
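
For example, from a backdoor session you might run the following; runner is the local variable provided by the backdoor itself, not something you construct:

>>> from nameko_chassis.debug import debug_runner
>>> debug_runner(runner)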

nameko_chassis.debug.debug_state_rich(state: nameko_chassis.service.ServiceState) → None

Pretty-print service state using rich.

nameko_chassis.debug.debug_state_simple(state: nameko_chassis.service.ServiceState) → None

Print service state to stdout with some rudimentary formatting.
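
As a sketch, either function can be fed the result of ServiceState.from_container(); here container is assumed to be a running ServiceContainer, e.g. obtained via ContainerProvider or from within the backdoor:

>>> from nameko_chassis.debug import debug_state_rich, debug_state_simple
>>> from nameko_chassis.service import ServiceState
>>> state = ServiceState.from_container(container)
>>> debug_state_rich(state)    # pretty output, requires rich
>>> debug_state_simple(state)  # plain stdout fallback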

class nameko_chassis.dependencies.ContainerProvider(*args, **kwargs)

Allows access to the ServiceContainer running the current worker.

get_dependency(worker_ctx)

Returns the ServiceContainer instance which runs the current worker.
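
A minimal sketch of declaring the provider on a service; the service and method names are illustrative, and max_workers is a standard ServiceContainer attribute:

from nameko.rpc import rpc
from nameko_chassis.dependencies import ContainerProvider

class MyService:
    name = "my_service"

    # injected as the ServiceContainer running the current worker
    container = ContainerProvider()

    @rpc
    def capacity(self):
        return self.container.max_workers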

class nameko_chassis.dependencies.OpenTelemetryConfig(*args, **kwargs)

Configures the OpenTelemetry (OTel) trace exporter over HTTP.

setup()

Called on bound Extensions before the container starts.

Extensions should do any required initialisation here.

class nameko_chassis.dependencies.SentryLoggerConfig(*args, **kwargs)
setup()

Called on bound Extensions before the container starts.

Extensions should do any required initialisation here.
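
If you attach these configuration extensions to a service yourself, a hedged sketch might look like the following; whether constructor arguments or extra config keys are required depends on the actual implementation, and the Service base class may already declare them for you:

from nameko_chassis.dependencies import OpenTelemetryConfig, SentryLoggerConfig

class MyService:
    name = "my_service"

    # hypothetical declarations; both extensions run setup() before the container starts
    otel = OpenTelemetryConfig()
    sentry = SentryLoggerConfig()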

class nameko_chassis.dependencies.ServiceDiscoveryProvider(*args, **kwargs)
get_dependency(worker_ctx: nameko.containers.WorkerContext) → nameko_chassis.discovery.ServiceDiscovery

Called before worker execution. A DependencyProvider should return an object to be injected into the worker instance by the container.
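
A hedged sketch of using the injected ServiceDiscovery from a worker; any constructor arguments (e.g. RabbitMQ management API credentials) are omitted because they depend on the provider's actual signature:

from nameko.rpc import rpc
from nameko_chassis.dependencies import ServiceDiscoveryProvider

class MyService:
    name = "my_service"

    discovery = ServiceDiscoveryProvider()  # may require management API settings

    @rpc
    def neighbours(self):
        # the injected object is a ServiceDiscovery instance
        return self.discovery.find_services()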

class nameko_chassis.discovery.ServiceDiscovery(client: pyrabbit.api.Client)

Provides introspection for nameko services defined on the RabbitMQ cluster.

find_services() → List[str]

Returns a list of service names which are available on the network.

is_nameko_service(queue_name: str) → bool

Checks if the queue name matches the pattern used by nameko for service queues.
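
Outside a worker, the class can also be constructed directly from a pyrabbit management client; the host, credentials, and queue name below are placeholders:

from pyrabbit.api import Client
from nameko_chassis.discovery import ServiceDiscovery

client = Client("rabbitmq.example.com:15672", "guest", "guest")
discovery = ServiceDiscovery(client)
print(discovery.find_services())
print(discovery.is_nameko_service("rpc-my_service"))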

exception nameko_chassis.health.ServiceTimeout
nameko_chassis.health.is_service_responsive(service_proxy: nameko.rpc.Client, fail_gracefully=False, timeout: float = 5, method_name: str = 'say_hello') → bool

A poor man’s circuit breaker for nameko service proxies.

A true circuit breaker would wrap each and every RPC method and monitor its error rate and duration. This implementation only checks whether the service responds within timeout seconds. By default, it raises an exception if the service is unreachable. However, if fail_gracefully is True, the function returns normally and it is up to the caller to implement some sort of fallback mechanism.

Parameters
  • service_proxy – nameko service proxy provided by RPCProxy

  • fail_gracefully – if True, don’t raise ServiceTimeout

  • timeout – timeout in seconds

  • method_name – which method to call to check if service is healthy

Raises

ServiceTimeout – if the service is unresponsive and fail_gracefully is False

Returns

True if the service is responsive
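
A sketch of graceful degradation in a caller; other_service is an illustrative RPC dependency (RpcProxy in nameko 2.x, ServiceRpc in 3.x), and get_data is a hypothetical remote method:

from nameko.rpc import rpc, RpcProxy
from nameko_chassis.health import is_service_responsive

class Gateway:
    name = "gateway"

    other_service = RpcProxy("other_service")

    @rpc
    def fetch(self):
        # with fail_gracefully=True no ServiceTimeout is raised; we fall back ourselves
        if is_service_responsive(self.other_service, fail_gracefully=True, timeout=2):
            return self.other_service.get_data()
        return {"status": "degraded", "data": None}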

class nameko_chassis.service.Service

Base class for nameko services.

query_state() → Dict[str, Any]

Returns a detailed state of the running service.

say_hello() → str

RPC method to ping the service to check if it can be reached.
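
From a nameko shell (where n is the shell's proxy object), a quick reachability check might look like:

>>> n.rpc.my_service.say_hello()  # returns a greeting string; exact text depends on the service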

serve_metrics(request: werkzeug.wrappers.request.Request) → werkzeug.wrappers.response.Response

Exposes Prometheus metrics over HTTP.

set_log_level(logger_name: str, level: int) → str

Temporarily override log level in a running service.

Useful, for example, when debugging a live service instance where the default log level is INFO or higher to avoid clutter in the logs. This RPC allows you to change the log level while the application is running.

For example:

>>> n.rpc.my_service.set_log_level("some.module", logging.DEBUG)

Now your logs will include debug messages from some.module even if your static log configuration (dictConfig etc.) silenced them.

Caveat #1: Updating the log level in this manner will only affect loggers acquired after this RPC call, so your code must call logging.getLogger() as late as possible. This unfortunately means that library code may or may not be affected, depending on how the library acquires its loggers.

Caveat #2: If your service runs in multiple replicas behind a load balancer, you must call this RPC method at least as many times as there are replicas to ensure that each replica will have its log level changed.
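
To work with Caveat #1, acquire loggers at call time rather than at import time; a minimal sketch:

import logging

def handle_event(payload):
    # acquiring the logger here (not at module import) lets a later set_log_level() call take effect
    logger = logging.getLogger("some.module")
    logger.debug("handling payload: %r", payload)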

class nameko_chassis.service.ServiceState(version: str, service_name: str, uptime: float, entrypoints: List[str], dependencies: List[str], running_workers: int, max_workers: int, worker_states: List[nameko_chassis.service.WorkerState])

Introspection result for an entire service, including running workers.

classmethod from_container(container: nameko.containers.ServiceContainer) → nameko_chassis.service.ServiceState

Introspects a service container and its workers to build ServiceState.

class nameko_chassis.service.WorkerState(class_name: str, method_name: str, args: List[str], kwargs: Dict[str, str], data: Dict[str, str], stacktrace: List[str])

Attributes of a single running worker greenthread.