object LeaderElection

Linear Supertypes
AnyRef, Any

Type Members

  1. class Live extends Service
  2. class LiveTemporary extends Service
  3. trait Service extends AnyRef

Value Members

  1. def configMapLock(lockName: String, retryPolicy: Schedule[Any, Any, Unit] = defaultRetryPolicy, deleteLockOnRelease: Boolean = true): ZLayer[ContextInfo with ConfigMaps with Pods, Nothing, LeaderElection]

    Simple leader election implementation

    The algorithm tries to create a ConfigMap with the given name and attaches the Pod it is running in as an owner of the ConfigMap.

    If the ConfigMap already exists, the leader election attempt fails and is retried with exponential backoff. If the ConfigMap is created successfully, the inner effect runs.

    When the code terminates normally, the acquired ConfigMap is released. If the whole Pod is killed without releasing the resource, the registered ownership makes Kubernetes apply cascading deletion, so eventually a new Pod can register the ConfigMap again.
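    A minimal sketch of wiring this layer, assuming the ZIO 1 module pattern implied by the signatures above (the runAsLeader accessor and its exact shape are assumptions, not taken from this page):

    ```scala
    import zio._

    // Sketch only: configMapLock comes from this object; the runAsLeader
    // method on Service is assumed from the descriptions on this page.
    val lockLayer: ZLayer[ContextInfo with ConfigMaps with Pods, Nothing, LeaderElection] =
      LeaderElection.configMapLock("my-operator-lock")

    // Run an effect only while this Pod holds the ConfigMap-based lock
    val guarded =
      ZIO.accessM[LeaderElection](_.get.runAsLeader {
        ZIO.unit // replace with the leader-only effect
      })
    ```

    The ContextInfo, ConfigMaps and Pods dependencies still have to be provided when the final program is run.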

  2. def customLeaderLock(lockName: String, retryPolicy: Schedule[Any, Any, Unit] = defaultRetryPolicy, deleteLockOnRelease: Boolean = true): ZLayer[ContextInfo with LeaderLockResources with Pods, Nothing, LeaderElection]

    Simple leader election implementation based on a custom resource

    The algorithm tries to create a LeaderLock resource with the given name and attaches the Pod it is running in as an owner of the LeaderLock.

    If the LeaderLock already exists, the leader election attempt fails and is retried with exponential backoff. If the LeaderLock is created successfully, the inner effect runs.

    When the code terminates normally, the acquired LeaderLock is released. If the whole Pod is killed without releasing the resource, the registered ownership makes Kubernetes apply cascading deletion, so eventually a new Pod can register the LeaderLock again.

    This method requires the LeaderLock custom resource to be registered in the cluster. As an alternative, take a look at configMapLock().

  3. val defaultRetryPolicy: Schedule[Any, Any, Unit]

    Default retry policy for acquiring the lock

  4. def fromLock: ZLayer[LeaderLock with ContextInfo, Nothing, LeaderElection]

    Constructs a leader election interface using a given LeaderLock layer

    For built-in leader election algorithms check configMapLock() and customLeaderLock().
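    A sketch of how fromLock might be composed with a hand-written lock layer (the myCustomLock layer below is hypothetical; layer composition follows ZIO 1 conventions):

    ```scala
    import zio._

    // Hypothetical layer providing a custom LeaderLock implementation
    val myCustomLock: ZLayer[Any, Nothing, LeaderLock] = ???

    // Feed the custom lock (plus ContextInfo from the environment) into fromLock
    val leaderElection: ZLayer[ContextInfo, Nothing, LeaderElection] =
      (myCustomLock ++ ZLayer.requires[ContextInfo]) >>> LeaderElection.fromLock
    ```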

  5. def leaseLock(lockName: String, leaseDuration: zio.Duration = 15.seconds, renewTimeout: zio.Duration = 10.seconds, retryPeriod: zio.Duration = 2.seconds): ZLayer[ContextInfo with Leases, Nothing, LeaderElection]

    Lease based leader election implementation

    Leadership is not guaranteed to be held forever; the effect executed in runAsLeader may be interrupted. It is recommended to retry runAsLeader in these cases to try to reacquire the lease.

    This is a reimplementation of the Go leaderelection package: https://github.com/kubernetes/client-go/blob/master/tools/leaderelection/leaderelection.go

    lockName: Name of the lease resource

    leaseDuration: Duration non-leader candidates must wait before acquiring leadership, measured against the time of the last observed change

    renewTimeout: The maximum time a leader is allowed to try to renew its lease before giving up

    retryPeriod: Retry period for acquiring and renewing the lease
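    Combining this layer with the retry recommendation above might look like the following sketch (the runAsLeader accessor is an assumption based on this page's description, and the timing values are illustrative):

    ```scala
    import zio._
    import zio.duration._

    // Sketch: explicit timing values shown for illustration; the defaults
    // (15s / 10s / 2s) are usually a reasonable starting point
    val leaseLayer: ZLayer[ContextInfo with Leases, Nothing, LeaderElection] =
      LeaderElection.leaseLock(
        "my-operator-lease",
        leaseDuration = 30.seconds,
        renewTimeout = 20.seconds,
        retryPeriod = 5.seconds
      )

    // Leadership can be lost, so retry runAsLeader to reacquire the lease
    val program =
      ZIO.accessM[LeaderElection](_.get.runAsLeader(ZIO.never))
        .retry(Schedule.spaced(10.seconds))
    ```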