Shahzad Bhatti Welcome to my ramblings and rants!

November 10, 2023

Building a Secured Family-friendly Password Manager with Multi-Factor Authentication

Filed under: Computing,Encryption,Password management — admin @ 10:49 pm

With the proliferation of online services and accounts, it has become almost impossible for users to remember a unique and strong password for each of them. Many users reuse the same password across multiple accounts, which is risky because if one account is compromised, all other accounts are at risk. Following major breaches such as Morgan Stanley (2022), Facebook (2019), Capital One (2019), and MyFitnessPal (2018), more services demand stronger and more complex passwords, which are harder to remember. Standards like FIDO (Fast IDentity Online), WebAuthn (Web Authentication), and Passkeys aim to address the problems associated with traditional passwords by introducing stronger, simpler, and more phishing-resistant user authentication methods. These standards mitigate man-in-the-middle attacks by using decentralized, on-device authentication, yet their universal adoption remains a work in progress. Until then, a popular alternative for dealing with password complexity is a password manager such as LessPass, 1Password, or Bitwarden, which offers enhanced security, convenience, and cross-platform access.

However, password managers carry their own security and privacy risks and become a single point of failure when they store user passwords in the cloud. Because password managers may also store other sensitive information such as credit card details and secure notes, cloud-based password managers with centralized storage become high-value targets for hackers. Many of them implement additional security measures such as end-to-end encryption, zero-knowledge architecture, and multi-factor authentication, but once hackers gain access to the encrypted password vaults, those vaults become vulnerable to sophisticated encryption attacks. For example, in 2022 LastPass, serving 25 million users, experienced significant security breaches. Attackers accessed a range of user data, including billing and email addresses, names, telephone numbers, and IP addresses. More alarmingly, the breach compromised customer vault data, revealing unencrypted website URLs alongside encrypted usernames, passwords, secure notes, and form-filled information. Access to the encrypted vaults allows "offline attacks", in which attackers use powerful computers to try millions of password guesses per second. In another incident, LastPass users were locked out of their accounts due to an MFA reset after a security upgrade.

To address these risks with cloud-based password managers, we have built a secure, family-friendly password manager named "PlexPass" with enhanced security and ease of use, including multi-device support for family members, without relying on storing any data in the cloud.

1.0 Design Tenets and Features

PlexPass is designed based on the following tenets and features:

  • End-to-End Encryption: All data is encrypted using strong cryptographic algorithms. The decryption key will be derived from the user’s master password.
  • Zero-Knowledge Architecture: The password manager won’t have the ability to view the decrypted data unless explicitly instructed by the user.
  • No Cloud: The password manager can be used as a local command-line tool or as a locally hosted web server, without storing any data in the cloud.
  • Great User Experience: It provides a great user-experience based on a command-line tool and a web-based responsive UI that can be accessed by local devices.
  • Strong Master Password: It encourages users to create a robust and strong master password.
  • Secure Password Generation: It allows users to generate strong, random passwords, reducing the temptation to reuse passwords.
  • Password Strength Analyzer: It evaluates the strength of stored passwords and prompts users to change weak or repeated ones.
  • Secure Import and Export: It allows users to import and export password vault data in a standardized, encrypted format so that users can backup and restore in case of application errors or device failures.
  • Data Integrity Checks: It verifies the integrity of the stored data to ensure it hasn’t been tampered with.
  • Version History: It stores encrypted previous versions of entries, allowing users to revert to older passwords or data if necessary.
  • Open-Source: The PlexPass is open-source so that the community can inspect the code, which can lead to the identification and rectification of vulnerabilities.
  • Regular Updates: It will be consistently updated to address known vulnerabilities and to stay aligned with best practices in cryptographic and security standards.
  • Physical Security: It ensures the physical security of the device where the password manager is installed, since the device itself becomes a potential point of vulnerability.
  • Data Breach Notifications: It allows users to scan their passwords against known breached password hashes (without compromising privacy) to detect credentials that may have been leaked in data breaches.
  • Multi-Device and Sharing: As a family-friendly password manager, PlexPass allows sharing passwords safely to the nearby trusted devices without the risks associated with online storage.
  • Clipboard Protection: It offers mechanisms like clearing the clipboard after a certain time to protect copied passwords.
  • Tagging and Organization: It provides users with the ability to organize entries using tags, categories, or folders for a seamless user experience.
  • Secure Notes: It stores encrypted notes and additional form-filled data.
  • Search and Filter Options: It provides intuitive search and filter capabilities.
  • Multi-Factor and Local Authentication: PlexPass supports multi-factor authentication based on One-Time Passwords (OTP), FIDO and WebAuthn for local authentication with biometrics, and hardware security keys such as YubiKey.

2.0 Cryptography

2.1 Password Hashing

OWASP, IETF best practices, and Sc00bz (a security researcher) recommend the following for password hashing:

  • Use Argon2id (winner of the 2015 Password Hashing Competition) with a minimum of 19 MiB of memory, an iteration count of 2, and 1 degree of parallelism (if Argon2id is not available, use scrypt with a cost parameter of 2^17, a minimum block size of 8, and a parallelization parameter of 1).
  • For FIPS-140 compliance, OWASP recommends PBKDF2 with a work factor of 600,000 or more and an internal hash function of HMAC-SHA-256. Other settings include:
    • PBKDF2-HMAC-SHA1: 1,300,000 iterations
    • PBKDF2-HMAC-SHA256: 600,000 iterations
    • PBKDF2-HMAC-SHA512: 210,000 iterations
  • Consider using a pepper to provide additional defense in depth.

Many popular password managers fall short of these standards, but PlexPass supports Argon2id with a memory cost of 64 MiB, an iteration count of 3, and parallelism of 1; PBKDF2-HMAC-SHA256 with 650,000 iterations; and a salt combined with a pepper for enhanced security.
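To make this configuration concrete, here is a minimal sketch of deriving a key from the master password using the Rust argon2 crate with the Argon2id parameters listed above (64 MiB of memory, 3 iterations, parallelism of 1). The function name and the use of the pepper as Argon2's optional secret parameter are illustrative assumptions, not the actual PlexPass code:

use argon2::{Algorithm, Argon2, Params, Version};

// Derive a 256-bit key from the master password, a per-user salt, and a
// device-specific pepper (passed as Argon2's optional secret parameter).
fn derive_master_key(
    master_password: &[u8],
    salt: &[u8],
    pepper: &[u8],
) -> Result<[u8; 32], argon2::Error> {
    // 64 MiB memory cost (in KiB), 3 iterations, 1 lane, 32-byte output
    let params = Params::new(64 * 1024, 3, 1, Some(32))?;
    let argon2 = Argon2::new_with_secret(pepper, Algorithm::Argon2id, Version::V0x13, params)?;
    let mut key = [0u8; 32];
    argon2.hash_password_into(master_password, salt, &mut key)?;
    Ok(key)
}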

2.2 Encryption

PlexPass will incorporate a robust encryption strategy that utilizes both symmetric and asymmetric encryption methodologies, in conjunction with envelope encryption, detailed as follows:

  • Symmetric Encryption: Based on OWASP recommendations, private account information is safeguarded with the symmetric algorithm AES (Advanced Encryption Standard) in GCM (Galois/Counter Mode), which provides confidentiality and authenticity, using a 256-bit key (AES-256) for the highest security. The symmetric key is used for both encryption and decryption of accounts and sensitive data.
  • Asymmetric Encryption: PlexPass employs Elliptic Curve Cryptography (ECC) with the SECP256k1 curve for encrypting symmetric keys and sharing data with other users based on a public key infrastructure (PKI). The public key is used for encrypting keys and shared data, while the private key is used for decryption. This allows users to share encrypted data without exchanging a secret key over the network.
  • Envelope Encryption: PlexPass's envelope encryption mechanism involves encrypting a symmetric data-encryption key (DEK) with an asymmetric key-encryption key (KEK). The symmetric DEK secures the actual user data, while the asymmetric KEK encrypts and protects the DEK itself. The top-level KEK is then encrypted with a symmetric key derived from the user's master password and a pepper key for the local device. This multi-tiered encryption ensures that even if data were accessed without authorization, it would remain undecipherable without the corresponding KEK (see the sketch after this list).
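As an illustration of the symmetric layer of this envelope scheme, the following sketch uses the Rust aes-gcm crate to generate a fresh data-encryption key (DEK) and encrypt a payload with AES-256-GCM. The function name is hypothetical, and in the actual scheme the returned DEK would itself be encrypted with the asymmetric KEK before being persisted:

use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

// Encrypt `plaintext` with a freshly generated data-encryption key (DEK).
// Returns (dek, nonce, ciphertext); the DEK would then be wrapped with the
// asymmetric KEK before being stored.
fn envelope_encrypt(plaintext: &[u8]) -> Result<(Vec<u8>, Vec<u8>, Vec<u8>), aes_gcm::Error> {
    let dek = Aes256Gcm::generate_key(OsRng);            // 256-bit DEK
    let cipher = Aes256Gcm::new(&dek);
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);   // 96-bit nonce, unique per message
    let ciphertext = cipher.encrypt(&nonce, plaintext)?;
    Ok((dek.as_slice().to_vec(), nonce.as_slice().to_vec(), ciphertext))
}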

3.0 Data Storage and Network Communication

With Envelope Encryption strategy, PlexPass ensures a multi-layered protective barrier for user accounts and sensitive information. This security measure involves encrypting data with a unique Symmetric key, which is then further secured using a combination of the user’s master password and a device-specific pepper key. The pepper key is securely stored within Hardware Security Modules (HSMs), providing an additional layer of defense. To generate the user’s secret key, PlexPass relies on the master password in tandem with the device’s pepper key, while ensuring that the master password itself is never stored locally or on any cloud-based platforms. PlexPass allows a versatile range of access points, including command-line tools, REST API, and a user-friendly interface. Although PlexPass is primarily designed for local hosting, it guarantees secure browser-to-local-server communications through the implementation of TLS 1.3, reflecting the commitment to the highest standards of network security.

3.1 Data Encryption

The following diagram illustrates how data is encrypted with the envelope encryption scheme:

The above diagram illustrates that a master secret key is derived from the combination of the user’s master password and a device-specific pepper key. The device pepper key is securely stored within HSM storage solutions, such as the macOS Keychain, or as encrypted files on other platforms. Crucially, neither the master user password nor the secret key is stored on any local or cloud storage system.

This master secret key plays a pivotal role in the encryption process: it encrypts the user’s private asymmetric key, which then encrypts the symmetric user key. The symmetric user key is utilized for the encryption of user data and messages. Furthermore, the user’s private key is responsible for encrypting the private key of each Vault, which in turn is used to encrypt the Vault’s symmetric key and the private keys of individual Accounts.

Symmetric keys are employed for the encryption and decryption of data, while asymmetric keys are used for encrypting and decrypting other encryption keys, as well as for facilitating the secure sharing of data between multiple users. This layered approach to encryption ensures robust security and privacy of the user data within the system.

4.0 Domain Model

The following section delineates the domain model crafted for implementing a password manager:

Class-diagram

4.1 User

A user refers to any individual utilizing the password manager, which may include family members or other users. The account information corresponding to each user is secured with a unique key, generated by combining the user’s master password with a device-specific pepper key.

4.2 Vault

A user has the capability to create multiple vaults, each serving as a secure storage space for account information and sensitive data, tailored for different needs. Additionally, users can grant access to their vaults to family members or friends, enabling them to view or modify shared credentials for Wi-Fi, streaming services, and other applications.

4.3 Account and Secure Notes

The Account entity serves as a repository for a variety of user data, which may include credentials for website access, credit card details, personal notes, or other bespoke attributes. Key attributes of the Account entity include:

  • label and description of account
  • username
  • password
  • email
  • website
  • category
  • tags
  • OTP and other MFA credentials
  • custom fields for credit cards, address and other data
  • secure notes for storing private notes

4.4 Password Policy

The Password Policy stipulates the guidelines for creating or adhering to specified password requirements, including the following (a possible Rust representation is sketched after this list):

  • Option for a random or memorable password.
  • A minimum quota of uppercase letters to include.
  • A requisite number of lowercase letters to include.
  • An essential count of digits to incorporate.
  • A specified number of symbols to be included.
  • The minimum allowable password length.
  • The maximum allowable password length.
  • An exclusion setting to omit ambiguous characters for clarity.
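A possible Rust shape for these policy settings is sketched below; the struct and field names are illustrative and may not match the actual PlexPass source:

pub struct PasswordPolicy {
    pub random: bool,              // random vs. memorable password
    pub min_uppercase: usize,      // minimum uppercase letters
    pub min_lowercase: usize,      // minimum lowercase letters
    pub min_digits: usize,         // minimum digits
    pub min_special_chars: usize,  // minimum symbols
    pub min_length: usize,         // minimum password length
    pub max_length: usize,         // maximum password length
    pub exclude_ambiguous: bool,   // omit ambiguous characters such as 0/O and 1/l
}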

4.5 Messages

The Message structure delineates different categories of messages employed for managing background operations, distributing information, or alerting users about potential password breaches.

4.6 Hashing and Cryptography algorithms

PlexPass offers the option to select from a variety of robust hashing algorithms, including Pbkdf2HmacSha256 and ARGON2id, as well as cryptographic algorithms like Aes256Gcm and ChaCha20Poly1305.

4.7 PasswordAnalysis

PasswordAnalysis encapsulates the outcome of assessing a password, detailing aspects such as:

  • The strength of the password.
  • Whether the password has been compromised or flagged in “Have I Been Pwned” (HIBP) breaches.
  • Similarity to other existing passwords.
  • Similarity to previously used passwords.
  • Reuse of the password across multiple accounts.
  • The entropy level, indicating the password’s complexity.
  • Compliance with established password creation policies

4.8 VaultAnalysis

VaultAnalysis presents a comprehensive evaluation of the security posture of all credentials within a vault, highlighting the following metrics:

  • The total number of accounts stored within the vault.
  • The quantity of passwords classified as strong due to their complexity and resistance to cracking attempts.
  • The tally of passwords deemed to have moderate strength, providing reasonable but not optimal security.
  • The count of passwords considered weak and vulnerable to being easily compromised.
  • The number of passwords that are not only strong but also have not been exposed in breaches or found to be reused.
  • The amount of credentials that have been potentially compromised or found in known data breaches.
  • The number of passwords that are reused across different accounts within the vault.
  • The tally of passwords that are notably similar to other passwords in the vault, posing a risk of cross-account vulnerability.
  • The count of current passwords that share similarities with the user’s past passwords, which could be a security concern if old passwords have been exposed.

4.9 System Configuration

The system configuration outlines a range of settings that determine the data storage path, HTTP server parameters, public and private key specifications for TLS encryption, preferred hashing and cryptographic algorithms, and other essential configurations.

4.10 UserContext

PlexPass mandates that any operation to access or modify user-specific information, including accounts, vaults, and other confidential data, is strictly governed by user authentication. The ‘UserContext’ serves as a secure container for the user’s authentication credentials, which are pivotal in the encryption and decryption processes of hierarchical cryptographic keys, adhering to the principles of envelope encryption.

5.0 Database Model and Schema

In general, there is a direct correlation between the domain model and the database schema, with the latter focusing primarily on key identifying attributes while preserving the integrity of user, vault, and account details through encryption. Furthermore, the database schema is designed to manage the cryptographic keys essential for the secure encryption and decryption of the stored data. The following section details the principal entities of the database model:

5.1 UserEntity

The UserEntity captures essential user attributes including the user_id and username. It securely retains encrypted user data alongside associated salt and nonce—components utilized in the encryption and decryption process. This entity leverages a secret-key, which is generated through a combination of the user’s master password and a unique device-specific pepper key. Importantly, the secret-key itself is not stored in the database to prevent unauthorized access.

5.2 LoginSessionEntity

The LoginSessionEntity records the details of user sessions, functioning as a mechanism to verify user access during remote engagements through the API or web interfaces.

5.3 CryptoKeyEntity

The CryptoKeyEntity encompasses both asymmetric and symmetric encryption keys. The symmetric key is encrypted by the asymmetric private key, which itself is encrypted using the public key of the parent CryptoKeyEntity. Key attributes include:

  • The unique identifier of the key.
  • The identifier of the parent key, which, if absent, signifies it as a root key—this is enforced as non-null for database integrity.
  • The user who owns the crypto key.
  • The keyable_id linked through a polymorphic association.
  • The keyable_type, determining the nature of the association.
  • The salt utilized in the encryption process.
  • The nonce that ensures encryption uniqueness.
  • The public key utilized for encryption purposes.
  • The secured private key, which is encrypted and used for value encryption tasks.

Note: The keyable_id and keyable_type facilitate implementing polymorphic relationships so that a CryptoKeyEntity can be associated with different types of objects such as Users, Vaults, and Accounts.
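For illustration, a Diesel table definition for this entity might look roughly like the following; the table and column names are assumptions based on the attributes above, not the actual PlexPass schema:

diesel::table! {
    crypto_keys (crypto_key_id) {
        crypto_key_id -> Text,
        parent_crypto_key_id -> Text,   // parent key id (root keys per the note above)
        user_id -> Text,
        keyable_id -> Text,             // polymorphic association target id
        keyable_type -> Text,           // e.g., "User", "Vault", or "Account"
        salt -> Text,
        nonce -> Text,
        public_key -> Text,
        encrypted_private_key -> Text,
    }
}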

5.4 VaultEntity

The VaultEntity is the structural representation of a secure repository designed for the safekeeping of account credentials and sensitive information. The primary attributes of the VaultEntity are as follows:

  • The user ID of the vault’s owner, indicating possession and control.
  • The designated name given to the vault for identification.
  • The category or type of vault, specifying its purpose or nature.
  • The salt applied during the encryption process, enhancing security.
  • The nonce, a number used once to prevent replay attacks, ensuring the uniqueness of each encryption.
  • The vault’s contents, securely encrypted to protect the confidentiality of the information it holds.
  • Other metadata such as unique identifier, version and timestamp for tracking changes.

5.5 AccountEntity

The AccountEntity serves as the database abstraction for the Account object, which is responsible for storing various user data, including account credentials, secure notes, and other bespoke attributes. Its principal characteristics are:

  • The vault_id that links the account to its respective vault.
  • The archived_version, which holds historical data of the account for reference or restoration purposes.
  • The salt, a random data input that is used in conjunction with hashing to ensure the uniqueness of each hash and prevent attacks such as hash collisions.
  • The key-nonce, a one-time use number utilized in the encryption process to guarantee the security of each encryption operation.
  • The encrypted_value, which is the securely encrypted form of the account’s data, preserving the confidentiality and integrity of user information.
  • The hash of primary attributes, which functions as a unique fingerprint to identify and prevent duplicate accounts from being created inadvertently.
  • Other metadata such as unique identifier, version and timestamp for tracking changes.

5.6 ArchivedAccountEntity

The ArchivedAccountEntity functions as a historical repository for AccountEntity records. Whenever a password or another vital piece of information within an account is altered, the original state of the account is preserved in this entity. This allows users to conveniently review previous versions of their account data, providing a clear audit trail of changes over time.

5.7 UserVaultEntity

The UserVaultEntity acts as the relational bridge between individual User entities and VaultEntity records. It facilitates the shared access of a single VaultEntity among multiple users while enforcing specific access control measures and adherence to predefined policies. This entity enables collaborative management of vault data based on access control policies and user’s permissions.

5.8 MessageEntity

The MessageEntity is a storage construct for various types of messages. These messages facilitate user notifications and alerts, sharing of vaults and account details, and scheduling of background processes. The entity ensures that operations meant to be executed on behalf of the user, such as sending notifications or processing queued tasks, are handled efficiently and securely.

5.9 AuditEntity

The AuditEntity functions as a comprehensive record for monitoring user activities within the system, primarily for enhancing security oversight. Key attributes of this entity are as follows:

  • The user associated with the audit event.
  • The specific category of the audit event.
  • The originating IP address from which the event was triggered.
  • A set of context parameters providing additional detail about the event.
  • The message which encapsulates the essence of the audit event.
  • Additional metadata that provides further insight into the audit occurrence.

5.10 ACLEntity

The ACLEntity is a structural component that dictates permissions within the system, controlling user access to resources such as Vaults. The principal attributes of this entity are outlined as follows:

  • The user-id to which the ACL pertains, determining who the permissions are assigned to.
  • The resource-type indicating the category of resource the ACL governs.
  • The resource-id which specifies the particular instance of the resource under ACL.
  • A permission mask that encodes the rights of access, such as read or write privileges.
  • The scope parameters that may define the context or extent of the permissions.
  • Supplementary metadata which could include the ACL’s identifier, version number, and the timestamp of its creation or last update.

6.0 Data Repositories

Data repositories act as the intermediary layer between the underlying database and the application logic. These repositories are tasked with providing specialized data access operations for their respective database models, such as the UserRepository, ACLRepository, LoginSessionRepository, and so forth. They offer a suite of standardized methods for data manipulation—adding, updating, retrieving, and searching entries within the database.

Each repository typically adheres to a common Repository interface, ensuring consistency and predictability across different data models. Additionally, they may include bespoke methods that cater to specific requirements of the data they handle. Leveraging Rust’s Diesel library, these repositories enable seamless interactions with relational databases like SQLite, facilitating the efficient execution of complex queries and ensuring the integrity and performance of data operations within the system.
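A rough sketch of such a common repository interface, in the same spirit as the service traits shown later, might look like the following; the trait name and method signatures are illustrative, not the actual PlexPass definitions:

#[async_trait]
pub trait Repository<T, ID> {
    // add a new entity
    async fn create(&self, ctx: &UserContext, entity: &T) -> PassResult<usize>;

    // update an existing entity
    async fn update(&self, ctx: &UserContext, entity: &T) -> PassResult<usize>;

    // retrieve an entity by its id
    async fn get(&self, ctx: &UserContext, id: &ID) -> PassResult<T>;

    // delete an entity by its id
    async fn delete(&self, ctx: &UserContext, id: &ID) -> PassResult<usize>;

    // search entities with pagination
    async fn find(
        &self,
        ctx: &UserContext,
        predicates: HashMap<String, String>,
        offset: i64,
        limit: usize,
    ) -> PassResult<PaginatedResult<T>>;
}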

7.0 Domain Services

Following diagram illustrates major components of the PlexPass application:

Components Diagram

The heart of the password manager’s functionality is orchestrated by domain services, each tailored to execute a segment of the application’s core business logic by interacting with data repository interfaces. These services encompass a diverse range of operations integral to the password manager such as:

7.1 AuthenticationService

AuthenticationService defines operations for user sign-in, sign-out, and multi-factor authentication, such as:

#[async_trait]
pub trait AuthenticationService {
    // signin and retrieve the user.
    async fn signin_user(
        &self,
        username: &str,
        master_password: &str,
        otp_code: Option<u32>,
        context: HashMap<String, String>,
    ) -> PassResult<(UserContext, User, UserToken, SessionStatus)>;

    // logout user
    async fn signout_user(&self, ctx: &UserContext, login_session_id: &str) -> PassResult<()>;

    // start registering MFA key
    async fn start_register_key(&self,
                                ctx: &UserContext,
    ) -> PassResult<CreationChallengeResponse>;

    // finish MFA registration and returns hardware key with recovery code
    async fn finish_register_key(&self,
                                 ctx: &UserContext,
                                 name: &str,
                                 req: &RegisterPublicKeyCredential) -> PassResult<HardwareSecurityKey>;

    // start MFA signin
    async fn start_key_authentication(&self,
                                      ctx: &UserContext,
    ) -> PassResult<RequestChallengeResponse>;

    // finish MFA signin
    async fn finish_key_authentication(&self,
                                       ctx: &UserContext,
                                       session_id: &str,
                                       auth: &PublicKeyCredential) -> PassResult<()>;
    // reset mfa keys
    async fn reset_mfa_keys(&self,
                            ctx: &UserContext,
                            recovery_code: &str,
                            session_id: &str) -> PassResult<()>;
}

7.2 UserService

UserService is entrusted with user management tasks, including registration, updates, and deletion of user profiles. It defines the following operations:

#[async_trait]
pub trait UserService {

    // signup and create a user.
    async fn register_user(&self,
                           user: &User,
                           master_password: &str,
                           context: HashMap<String, String>, ) -> PassResult<UserContext>;

    // get user by id.
    async fn get_user(&self, ctx: &UserContext, id: &str) -> PassResult<(UserContext, User)>;

    // updates existing user.
    async fn update_user(&self, ctx: &UserContext, user: &User) -> PassResult<usize>;

    // delete the user by id.
    async fn delete_user(&self, ctx: &UserContext, id: &str) -> PassResult<usize>;

    // encrypt asymmetric
    async fn asymmetric_user_encrypt(&self,
                                     ctx: &UserContext,
                                     target_username: &str,
                                     data: Vec<u8>,
                                     encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;

    // decrypt asymmetric
    async fn asymmetric_user_decrypt(&self,
                                     ctx: &UserContext,
                                     data: Vec<u8>,
                                     encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;

    /// Generate OTP
    async fn generate_user_otp(&self, ctx: &UserContext) -> PassResult<u32>;
}

7.3 VaultService

VaultService facilitates the creation, modification, and deletion of vaults, in addition to managing access controls. It defines the following operations:

#[async_trait]
pub trait VaultService {
    // create a vault.
    async fn create_vault(&self, ctx: &UserContext, vault: &Vault) -> PassResult<usize>;

    // updates existing vault.
    async fn update_vault(&self, ctx: &UserContext, vault: &Vault) -> PassResult<usize>;

    // get the vault by id.
    async fn get_vault(&self, ctx: &UserContext, id: &str) -> PassResult<Vault>;

    // delete the vault by id.
    async fn delete_vault(&self, ctx: &UserContext, id: &str) -> PassResult<usize>;

    // find all vaults by user_id without account summaries.
    async fn get_user_vaults(&self, ctx: &UserContext) -> PassResult<Vec<Vault>>;

    // account summaries.
    async fn account_summaries_by_vault(
        &self,
        ctx: &UserContext,
        vault_id: &str,
        q: Option<String>,
    ) -> PassResult<Vec<AccountSummary>>;
}

7.4 AccountService

AccountService oversees the handling of account credentials, ensuring secure storage, retrieval, and management. It defines the following operations:

#[async_trait]
pub trait AccountService {
    // create an account.
    async fn create_account(&self, ctx: &UserContext, account: &Account) -> PassResult<usize>;

    // updates existing account.
    async fn update_account(&self, ctx: &UserContext, account: &Account) -> PassResult<usize>;

    // get account by id.
    async fn get_account(&self, ctx: &UserContext, id: &str) -> PassResult<Account>;

    // delete the account by id.
    async fn delete_account(&self, ctx: &UserContext, id: &str) -> PassResult<usize>;

    // find all accounts by vault.
    async fn find_accounts_by_vault(
        &self,
        ctx: &UserContext,
        vault_id: &str,
        predicates: HashMap<String, String>,
        offset: i64,
        limit: usize,
    ) -> PassResult<PaginatedResult<Account>>;

    // count all accounts by vault.
    async fn count_accounts_by_vault(
        &self,
        ctx: &UserContext,
        vault_id: &str,
        predicates: HashMap<String, String>,
    ) -> PassResult<i64>;
}

7.5 EncryptionService

EncryptionService provides the encryption and decryption operations for protecting sensitive data. It defines the following operations:

pub trait EncryptionService {
    // generate private public keys
    fn generate_private_public_keys(&self,
                                    secret: Option<String>,
    ) -> PassResult<(String, String)>;

    // encrypt asymmetric
    fn asymmetric_encrypt(&self,
                          pk: &str,
                          data: Vec<u8>,
                          encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;

    // decrypt asymmetric
    fn asymmetric_decrypt(&self,
                          sk: &str,
                          data: Vec<u8>,
                          encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;


    // encrypt symmetric
    fn symmetric_encrypt(&self,
                         salt: &str,
                         pepper: &str,
                         secret: &str,
                         data: Vec<u8>,
                         encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;

    // decrypt symmetric
    fn symmetric_decrypt(&self,
                         pepper: &str,
                         secret: &str,
                         data: Vec<u8>,
                         encoding: EncodingScheme,
    ) -> PassResult<Vec<u8>>;
}

7.6 ImportExportService

ImportExportService allows users to import account data into vaults or export it for backup or other purposes, ensuring data portability. It defines the following operations:

#[async_trait]
pub trait ImportExportService {
    // import accounts.
    async fn import_accounts(&self,
                             ctx: &UserContext,
                             vault_id: Option<String>,
                             vault_kind: Option<VaultKind>,
                             password: Option<String>,
                             encoding: EncodingScheme,
                             data: &[u8],
                             callback: Box<dyn Fn(ProgressStatus) + Send + Sync>,
    ) -> PassResult<ImportResult>;

    // export accounts.
    async fn export_accounts(&self,
                             ctx: &UserContext,
                             vault_id: &str,
                             password: Option<String>,
                             encoding: EncodingScheme,
                             callback: Box<dyn Fn(ProgressStatus) + Send + Sync>,
    ) -> PassResult<(String, Vec<u8>)>;
}

Note: The import and export operations may take a long time, so the service accepts a callback function to update the user with the progress of the operation.

7.7 MessageService

MessageService manages the creation, delivery, and processing of messages within the system, whether for notifications or data sharing. It defines the following operations:

#[async_trait]
pub trait MessageService {
    // create a message.
    async fn create_message(&self, ctx: &UserContext, message: &Message) -> PassResult<usize>;

    // updates existing message flags
    async fn update_message(&self, ctx: &UserContext, message: &Message) -> PassResult<usize>;

    // delete the message by id.
    async fn delete_message(&self, ctx: &UserContext, id: &str) -> PassResult<usize>;

    // find all messages by vault.
    async fn find_messages_by_user(
        &self,
        ctx: &UserContext,
        kind: Option<MessageKind>,
        offset: i64,
        limit: usize,
    ) -> PassResult<PaginatedResult<Message>>;
}

7.8 PasswordService

PasswordService offers operations for the generation of secure passwords, alongside analytical features to assess password strength and security. It defines the following operations:

#[async_trait]
pub trait PasswordService {
    // create strong password.
    async fn generate_password(&self, policy: &PasswordPolicy) -> Option<String>;

    // check strength of password.
    async fn password_info(&self, password: &str) -> PassResult<PasswordInfo>;

    // check strength of password.
    async fn password_compromised(&self, password: &str) -> PassResult<bool>;

    // check if email is compromised.
    async fn email_compromised(&self, email: &str) -> PassResult<String>;

    // check similarity of password.
    async fn password_similarity(&self, password1: &str, password2: &str) -> PassResult<PasswordSimilarity>;

    // analyze passwords and accounts of all accounts in given vault
    // It returns hashmap by account-id and password analysis
    async fn analyze_all_account_passwords(&self, ctx: &UserContext, vault_id: &str) -> PassResult<VaultAnalysis>;

    // analyze passwords and accounts of all accounts in all vaults
    // It returns hashmap by (vault-id, account-id) and password analysis
    async fn analyze_all_vault_passwords(&self, ctx: &UserContext) -> PassResult<HashMap<String, VaultAnalysis>>;

    // schedule password analysis for vault
    async fn schedule_analyze_all_account_passwords(&self, ctx: &UserContext, vault_id: &str) -> PassResult<()>;

    // schedule password analysis for all vaults
    async fn schedule_analyze_all_vault_passwords(&self, ctx: &UserContext) -> PassResult<()>;
}

7.9 ShareVaultAccountService

ShareVaultAccountService handles the intricacies of sharing vaults and accounts, enabling collaborative access among authorized users. It defines the following operations:

/// Service interface for sharing vaults or accounts.
#[async_trait]
pub trait ShareVaultAccountService {
    // share vault with another user
    async fn share_vault(
        &self,
        ctx: &UserContext,
        vault_id: &str,
        target_username: &str,
        read_only: bool,
    ) -> PassResult<usize>;

    // share account with another user
    async fn share_account(
        &self,
        ctx: &UserContext,
        account_id: &str,
        target_username: &str,
    ) -> PassResult<usize>;

    // lookup usernames
    async fn lookup_usernames(
        &self,
        ctx: &UserContext,
        q: &str,
    ) -> PassResult<Vec<String>>;

    // handle shared vaults and accounts from inbox of messages
    async fn handle_shared_vaults_accounts(
        &self,
        ctx: &UserContext,
    ) -> PassResult<(usize, usize)>;
}

PlexPass employs a Public Key Infrastructure (PKI) for secure data sharing, whereby a user’s vault and account keys are encrypted using the intended recipient’s public key. This encrypted data is then conveyed as a message, which is deposited into the recipient’s inbox. Upon the recipient’s next login, they use their private key to decrypt the message. This process of decryption serves to forge a trust link, granting the recipient authorized access to the shared vault and account information, strictly governed by established access control protocols.
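To make this flow concrete, here is a minimal sketch of how the sharing operations defined above might be invoked; the `Services` container and its field names are hypothetical, while `share_vault` and `handle_shared_vaults_accounts` come from the ShareVaultAccountService trait shown earlier:

// Alice shares a vault with Bob in read-only mode; the vault keys are encrypted
// with Bob's public key and delivered as a message to Bob's inbox.
async fn share_wifi_vault(
    services: &Services,            // hypothetical container of service trait objects
    alice: &UserContext,
    vault_id: &str,
) -> PassResult<usize> {
    services
        .share_vault_account_service
        .share_vault(alice, vault_id, "bob", true /* read_only */)
        .await
}

// On Bob's next sign-in, his inbox is processed and the shared vault becomes
// accessible under the established access-control rules.
async fn accept_pending_shares(
    services: &Services,
    bob: &UserContext,
) -> PassResult<(usize, usize)> {
    services
        .share_vault_account_service
        .handle_shared_vaults_accounts(bob)
        .await
}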

7.10 SettingService

SettingService allows managing user preferences and settings with the following operations:

#[async_trait]
pub trait SettingService {
   // create a setting.
   async fn create_setting(&self, ctx: &UserContext, setting: &Setting) -> PassResult<usize>;

   // updates existing setting.
   async fn update_setting(&self, ctx: &UserContext, setting: &Setting) -> PassResult<usize>;

   // delete the setting by kind and name.
   async fn delete_setting(
       &self,
       ctx: &UserContext,
       kind: SettingKind,
       name: &str,
   ) -> PassResult<usize>;

   // get setting by kind.
   async fn get_settings(&self, ctx: &UserContext, kind: SettingKind) -> PassResult<Vec<Setting>>;

   // get setting by id.
   async fn get_setting(
       &self,
       ctx: &UserContext,
       kind: SettingKind,
       name: &str,
   ) -> PassResult<Setting>;
}

7.11 OTPService

OTPService defines operations for managing One-Time-Passwords (OTP):

#[async_trait]
pub trait OTPService {
   /// Generate OTP
   async fn generate_otp(&self, secret: &str) -> PassResult<u32>;
   /// Extract OTP secret from QRCode
   async fn convert_from_qrcode(&self, ctx: &UserContext, image_data: &[u8]) -> PassResult<Vec<AccountResponse>>;
   /// Create QRCode image for OTP secrets
   async fn convert_to_qrcode(&self, ctx: &UserContext,
                              secrets: Vec<&str>,
   ) -> PassResult<Vec<u8>>;
   /// Extract OTP secret from QRCode file
   async fn convert_from_qrcode_file(&self, ctx: &UserContext,
                                     in_path: &Path) -> PassResult<Vec<AccountResponse>>;
   /// Create QRCode image file for OTP secrets
   async fn convert_to_qrcode_file(&self, ctx: &UserContext, secrets: Vec<&str>,
                                   out_path: &Path) -> PassResult<()>;
}

7.12 AuditLogService

AuditLogService specializes in the retrieval and querying of audit logs, which are automatically generated to track activities for security monitoring. It defines the following operations:

#[async_trait]
pub trait AuditLogService {
    async fn find(&self,
            ctx: &UserContext,
            predicates: HashMap<String, String>,
            offset: i64,
            limit: usize,
    ) -> PassResult<PaginatedResult<AuditLog>>;
}

8.0 API and UI Controllers

PlexPass employs API controllers to establish RESTful endpoints and UI controllers to manage the rendering of the web interface. Typically, there’s a direct correlation between each API and UI controller and their respective domain services. These controllers act as an intermediary, leveraging the domain services to execute the core business logic.

9.0 Commands

PlexPass adopts the command pattern for its command line interface, where each command is associated with a specific user action within the password management system.
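As an illustration of this pattern, each CLI subcommand can be modeled as a variant of an enum that knows how to execute itself. The sketch below is hypothetical and only mirrors a few of the subcommands listed in the command-line help:

// A simplified, hypothetical command enum; the real CLI defines many more
// subcommands (create-user, create-vault, get-vaults, share-vault, etc.).
enum CliCommand {
    CreateUser { username: String },
    CreateVault { title: String },
    GetVaults,
}

impl CliCommand {
    fn execute(&self) {
        match self {
            CliCommand::CreateUser { username } => println!("creating user {username}"),
            CliCommand::CreateVault { title } => println!("creating vault {title}"),
            CliCommand::GetVaults => println!("listing vaults"),
        }
    }
}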

10.0 Design Decisions

The architectural considerations for the design and implementation of PlexPass – a password manager – encompassed several key strategies:

  1. Security-First Approach: Ensuring the highest level of security for stored credentials was paramount in the PlexPass design. This involved integrating robust encryption methods, such as AES-256 for data at rest and TLS 1.3 for data in transit, alongside secure hashing algorithms for password storage.
  2. User-Centric Design: User experience was prioritized by providing a clean, intuitive interface and seamless interactions, whether through a command-line interface, RESTful APIs, or a web application.
  3. Performance: PlexPass chose Rust for implementation to leverage its performance, safety, and robustness, ensuring a highly secure and efficient password manager.
  4. Modular Structure: PlexPass was designed with modular architecture by segmenting the application into distinct services, controllers, and repositories to facilitate maintenance and future enhancements.
  5. Object/Relation Mapping: PlexPass utilizes the Diesel framework for its database operations, which offers an extensive ORM toolkit for efficient data handling and compatibility with various leading relational databases.
  6. MVC Architecture: PlexPass employs the Model-View-Controller (MVC) architectural pattern to structure its web application, enhancing the clarity and maintainability of the codebase. In this architecture, the Model component represents the data and the business logic of the application. It’s responsible for retrieving, processing, and storing data, and it interacts with the database layer. The model defines the essential structures and functions that represent the application’s core functionality and state. The View utilizes the Askama templating engine, a type-safe and fast Rust templating engine, to dynamically generate HTML content. The Controller acts as an intermediary between the Model and the View.
  7. Multi-Factor Authentication: PlexPass supports multi-factor authentication based on One-Time Passwords (OTP), FIDO, WebAuthn, and YubiKey. This required careful implementation of these standards using widely used libraries where available, as well as handling a number of edge cases such as losing a security device and adding multi-factor authentication to the REST APIs and CLI tools, not just the UI application.
  8. Extensibility and Flexibility: PlexPass design considered future extensions to allow for additional features such as shared vaults and multi-factor authentication to be added without major overhauls.
  9. Internationalization and Localization: PlexPass employs the Fluent library, a modern localization system designed for natural-sounding translations. This ensures that PlexPass user-interface is linguistically and culturally accessible to users worldwide.
  10. Authorization and Access Control: PlexPass rigorously upholds stringent ownership and access control measures, guaranteeing that encrypted private data remains inaccessible without appropriate authentication. Furthermore, it ensures that other users can access shared Vaults and Accounts solely when they have been explicitly authorized with the necessary read or write permissions.
  11. Cross-Platform Compatibility: PlexPass design ensured compatibility across different operating systems and devices, enabling users to access their password vaults from any platform.
  12. Privacy by Design: User privacy was safeguarded by adopting principles like minimal data retention and ensuring that sensitive information, such as master passwords, is never stored in a file or persistent database.
  13. Asynchronous Processing: PlexPass uses asynchronous processing for computationally intensive tasks such as password analysis so that the UI and APIs remain highly responsive.
  14. Data Portability: PlexPass empowers users with full control over their data by offering comprehensive import and export features, facilitating effortless backup and data management.
  15. Robust Error Handling and Logging: PlexPass applies comprehensive logging, auditing and error-handling mechanisms to facilitate troubleshooting and enhance the security audit trail.
  16. Compliance with Best Practices: PlexPass design adhered to industry best practices and standards for password management and data protection regulations throughout the development process.
  17. Health Metrics: PlexPass incorporates Prometheus, a powerful open-source monitoring and alerting toolkit, to publish and manage its API and business service metrics. This integration plays a crucial role in maintaining the reliability and efficiency of the system through enhanced monitoring capabilities.

11.0 User Guide

The following section serves as a practical guide for utilizing PlexPass, a secured password management solution. Users have the convenience of interacting with PlexPass through a variety of interfaces including a command-line interface (CLI), RESTful APIs, and a user-friendly web application.

11.1 Build and Installation

Check out PlexPass from https://github.com/bhatti/PlexPass and then build it using:

git clone git@github.com:bhatti/PlexPass.git
cd PlexPass
cargo build --release && ./target/release/plexpass server

Alternatively, you can use Docker for the server by pulling plexpass image as follows:

docker pull plexobject/plexpass:latest
docker run -p 8080:8080 -p 8443:8443 -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -e CERT_FILE=/data/cert-pass.pem \
    -e KEY_FILE=/data/key-pass.pem -v $PARENT_DIR/PlexPassData:/data plexpass server

Note: You only need to run server when using REST APIs or a UI for the web application.

11.2 User Registration

Before engaging with the system, users are required to complete the registration process. They have the following options to establish their access:

11.2.1 Command Line

./target/release/plexpass -j true --master-username charlie --master-password *** create-user

The -j argument generates a JSON output such as:

{
  "user_id": "3b12c573-d4e4-4470-a1b4-ac7689c40e8a",
  "version": 0,
  "username": "charlie",
  "name": null,
  "email": null,
  "locale": null,
  "light_mode": null,
  "icon": null,
  "attributes": [],
  "created_at": "2023-11-06T04:25:28.004321",
  "updated_at": "2023-11-06T04:25:28.004325"
}

11.2.2 Docker CLI

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username charlie --master-password *** create-user

11.2.3 REST API

curl -v -k https://localhost:8443/api/v1/auth/signup \
	--header "Content-Type: application/json; charset=UTF-8" \
    -d '{"username": "david", "master_password": "***"}'

or

import requests

SERVER = 'https://localhost:8443'
headers = {'Content-Type': 'application/json'}
data = {'username': 'charlie', 'master_password': '***'}
resp = requests.post(SERVER + '/api/v1/auth/signup', json = data,
                     headers = headers, verify = False)

11.2.4 Web Application UI

Once the server is started, you can point a browser to it, e.g., https://localhost:8443, and it will show the interface for sign-in and registration:

Sign up UI

11.3 User Signin

User sign-in is required when using the REST APIs, whereas the CLI accepts the master username and password with each command. The REST API generates a JWT token, which is required for accessing all other APIs, e.g.,

11.3.1 REST APIs

curl -v -k https://localhost:8443/api/v1/auth/signin \
	--header "Content-Type: application/json; charset=UTF-8" \
    -d '{"username": "bob", "master_password": ""}'

It will show the JWT Token in the response, e.g.,

< HTTP/2 200
< content-length: 50
< content-type: application/json
< access_token: eyJ0eXA***
< vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers
< date: Tue, 07 Nov 2023 20:19:42 GMT
<

11.3.2 WebApp UI

Alternatively, you can sign in to the web application if you have already registered, e.g.,

Note: Once you are signed in, you will see all your vaults and accounts as follows, but we will skip the Web UI in most of the remaining user-guide sections:

Home UI

Note: PlexPass Web application automatically flags weak or compromised passwords with red background color.

11.3.3 Command Line Help

You can use the -h argument to see the full list of commands supported by the PlexPass CLI, e.g.,

PlexPass - a locally hostable secured password manager

Usage: plexpass [OPTIONS] <COMMAND>

Commands:
  server

  get-user

  create-user

  update-user

  delete-user

  create-vault

  update-vault

  delete-vault

  get-vault

  get-vaults

  create-account

  update-account

  get-account

  get-accounts

  delete-account

  query-audit-logs

  create-category

  delete-category

  get-categories

  generate-private-public-keys

  asymmetric-encrypt

  asymmetric-decrypt

  asymmetric-user-encrypt

  asymmetric-user-decrypt

  symmetric-encrypt

  symmetric-decrypt

  import-accounts

  export-accounts

  generate-password

  password-compromised

  password-strength

  email-compromised

  analyze-vault-passwords

  analyze-all-vaults-passwords

  search-usernames

  generate-otp

  generate-account-otp

  generate-user-otp

  generate-api-token

  reset-multi-factor-authentication

  share-vault

  share-account

  help
          Print this message or the help of the given subcommand(s)

Options:
  -j, --json-output <JSON_OUTPUT>
          json output of result from action [default: false] [possible values: true, false]
  -d, --data-dir <DATA_DIR>
          Sets a data directory
      --device-pepper-key <DEVICE_PEPPER_KEY>
          Device pepper key
      --crypto-algorithm <CRYPTO_ALG>
          Sets default crypto algorithm [possible values: aes256-gcm, cha-cha20-poly1305]
      --hash-algorithm <HASH_ALG>
          Sets default crypto hash algorithm [possible values: pbkdf2-hmac-sha256, argon2id]
      --master-username <MASTER_USERNAME>
          The username of local user
      --master-password <MASTER_PASSWORD>
          The master-password of user
      --otp-code <OTP_CODE>
          The otp-code of user
  -c, --config <FILE>
          Sets a custom config file
  -h, --help
          Print help information
  -V, --version
          Print version information

11.4 User Profile

11.4.1 Command Line

You can view your user profile using:

./target/release/plexpass -j true --master-username charlie \
	--master-password *** get-user

Which will show your user profile such as:

{
  "user_id": "d163a4bb-6767-4f4c-845f-86874a04fe20",
  "version": 0,
  "username": "charlie",
  "name": null,
  "email": null,
  "locale": null,
  "light_mode": null,
  "icon": null,
  "attributes": [],
  "created_at": "2023-11-07T20:30:32.063323960",
  "updated_at": "2023-11-07T20:30:32.063324497"
}

11.4.2 REST API

You can view your user profile using:

curl -k https://localhost:8443/api/v1/users/me \
	--header "Content-Type: application/json; charset=UTF-8"  \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.4.3 Docker CLI

You can view your user profile with docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username charlie --master-password *** get-user

11.5 Update User Profile

11.5.1 Command Line

You can update your user profile using CLI as follows:

./target/release/plexpass -j true --master-username charlie \
	--master-password *** update-user --name "Charles" --email "charlie@mail.com"

11.5.2 REST API

You can update your user profile using REST APIs as follows:

curl -k https://localhost:8443/api/v1/users/me \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" -d '{"user_id"...}'

11.5.3 Docker CLI

You can update your user profile using docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username charlie \
	--master-password *** update-user --name "Charles" --email "charlie@mail.com"

11.5.4 Web UI

You can also update your user profile using web UI as displayed in following screenshot:

Edit Profile

The UI allows you to view/edit user profile, manage security-keys for multi-factor authentication and generate API tokens:

Settings

11.5b Update User Password

11.5b.1 Command Line

You can update your user password using the CLI as follows:

./target/release/plexpass -j true --master-username george@cat.us --master-password ** \
    change-user-password --new-password ** --confirm-new-password **

11.5b.2 REST API

You can update your user password using REST APIs as follows:

curl -k https://localhost:8443/api/v1/users/me \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" -d '{"old_password": "*", "new_password": "*", "confirm_new_password": "*"}'

11.5b.3 Docker CLI

You can update your user password using the Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username george@cat.us --master-password ** \
    change-user-password --new-password ** --confirm-new-password **

11.5b.4 Web UI

You can also update your user password from user settings using web UI as displayed in following screenshot:

change-password

11.6 Creating Vaults

PlexPass automatically creates a few Vaults upon registration but you can create additional vaults as follows:

11.6.1 Command Line

You can create new Vault using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** create-vault --title MyVault --kind Logins

11.6.2 REST API

You can create new Vault using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/vaults \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    -d '{"title": "NewVault"}'

11.6.3 Docker CLI

You can create new Vault using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username frank --master-password ** \
    create-vault --title MyVault

11.7 Querying Vaults

11.7.1 Command Line

You can query all Vaults using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** get-vaults

Which will show a list of vaults such as:

[
  {
    "vault_id": "44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa",
    "version": 0,
    "owner_user_id": "c81446b7-8de4-41d7-b5a7-36d4075777bc",
    "title": "Identity",
    "kind": "Logins",
    "created_at": "2023-11-08T03:45:44.163762",
    "updated_at": "2023-11-08T03:45:44.163762"
  },
  {
    "vault_id": "070ba646-192b-47df-8134-c6ed40056575",
    "version": 0,
    "owner_user_id": "c81446b7-8de4-41d7-b5a7-36d4075777bc",
    "title": "Personal",
    "kind": "Logins",
    "created_at": "2023-11-08T03:45:44.165378",
    "updated_at": "2023-11-08T03:45:44.165378"
  },
  ..
]

11.7.2 REST API

You can query Vaults using REST API as follows:

curl -v -k https://localhost:8443/api/v1/vaults \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.7.3 Docker CLI

You can query Vaults using the Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data \
    plexpass -j true --master-username frank \
    --master-password ** get-vaults

11.8 Show Specific Vault Data

11.8.1 Command Line

You can query specific Vault using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** get-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa

Which will show the vault details such as:

{
  "vault_id": "44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa",
  "version": 0,
  "owner_user_id": "c81446b7-8de4-41d7-b5a7-36d4075777bc",
  "title": "Identity",
  "kind": "Logins",
  "icon": null,
  "entries": null,
  "analysis": null,
  "analyzed_at": null,
  "created_at": "2023-11-08T03:45:44.163762",
  "updated_at": "2023-11-08T03:45:44.163762"
}

11.8.2 REST API

You can show a specific Vault using the REST API as follows, where 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa is the vault-id:

curl -v -k https://localhost:8443/api/v1/vaults/44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.8.3 Docker CLI

You can show a specific Vault using the Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username frank --master-password * \
    get-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa

11.9 Updating a Vault Data

11.9.1 Command Line

You can update a Vault using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** update-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
    --icon newicon --title new-title

11.9.2 REST API

You can update a Vault using REST API as follows:

curl -v -k -X PUT https://localhost:8443/api/v1/vaults/44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"  \
    -d '{"vault_id": "44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa", "version": 0, "title": "new-title"}'

11.9.3 Docker CLI

You can update a Vault using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username frank --master-password *** \
    update-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa --title new-title

11.10 Deleting a Vault Data

11.10.1 Command Line

You can delete a Vault using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** delete-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa

11.10.2 REST API

You can delete a Vault using REST API as follows:

curl -v -k -X DELETE https://localhost:8443/api/v1/vaults/44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.10.3 Docker CLI

You can delete a Vault using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
    -j true --master-username frank --master-password *** \
    delete-vault --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa

11.11 Creating an Account Data

11.11.1 Command Line

You can create an Account using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** create-account --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
    --label "My Bank Login" --username samuel --email "myemail@somewhere.com" \
    --url "https://mybank.com"  --category "Banking" \
    --password "***" --notes "Lorem ipsum dolor sit amet"

11.11.2 REST API

You can create an Account using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/vaults \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    -d '{"vault_id": "44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa", "label": "Amazon", "username": "harry", "password": "**", "url": "https://www.amazon.com", "email": "harry@bitvault.com"}'

11.11.3 Docker CLI

You can create an Account using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** create-account --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa \
    --label "My Bank Login" --username samuel --email "myemail@somewhere.com" \
    --url "https://mybank.com"  --category "Banking" \
    --password "***" --notes "Lorem ipsum dolor sit amet"

11.12 Querying Data

11.12.1 Command Line

You can query Accounts data using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** get-accounts \
    --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa --q "amazon"

You can search your accounts based on username, email, categories, tags, label, and description with the above command. For example, the above command will show all Amazon accounts.
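For example, to filter by a category instead of a keyword, you can pass the category name as the query (a hypothetical query value, assuming accounts in the "Banking" category exist in the vault):

./target/release/plexpass -j true --master-username eddie \
	--master-password *** get-accounts \
    --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa --q "Banking"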

11.12.2 REST API

You can query Accounts using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    "https://localhost:8443/api/v1/vaults/44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa/accounts?q=amazon" 

11.12.3 Docker CLI

You can query Accounts using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** get-accounts \
    --vault-id 44c0f4bc-8aca-46ac-a80a-9bd25c8f06aa --q "amazon"

11.13 Showing a specific Account by ID

11.13.1 Command Line

You can show a specific Account by its ID using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** get-account --account-id $account_id

This will show the account details such as:

{
  "vault_id": "73b091ba-710b-4de4-8a1e-c185e3ddd304",
  "account_id": "e60321d2-8cad-42b8-b779-4ba5b8911bbf",
  "version": 2,
  "kind": "Login",
  "label": null,
  "favorite": false,
  "risk": "High",
  "description": null,
  "username": "bob",
  "password": "**",
  "email": "bob@bitvault.com",
  "url": "https://amazon.com",
  "category": "Shopping",
  "tags": ["Family"],
  "otp": null,
  "icon": null,
  "form_fields": {"CreditCard": "***"},
  "notes": null,
  "advisories": {
    "WeakPassword": "The password is MODERATE",
    "CompromisedPassword": "The password is compromised and found in 'Have I been Pwned' database."
  },
  "renew_interval_days": null,
  "expires_at": null,
  "credentials_updated_at": "2023-11-08T02:39:50.656771977",
  "analyzed_at": "2023-11-08T02:40:00.019194124",
  "password_min_uppercase": 1,
  "password_min_lowercase": 1,
  "password_min_digits": 1,
  "password_min_special_chars": 1,
  "password_min_length": 12,
  "password_max_length": 16,
  "created_at": "2023-11-08T02:39:50.657929166",
  "updated_at": "2023-11-08T02:40:00.020244928"
}

11.13.2 REST API

You can show Account by its ID using REST API as follows:

curl -v -k https://localhost:8443/api/v1/vaults/$vault_id/accounts/$account_id \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.13.3 Docker CLI

You can show an Account using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** get-account \
    --account-id $account_id

11.14 Updating an Account by ID

11.14.1 Command Line

You can update an Account data using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** update-account --vault-id $vault_id \
    --account-id $account_id --kind Logins \
    --label "My Bank" --username myuser --email "myemail@bitvault.com" \
    --password "**" --notes "Lorem ipsum dolor sit amet."

11.14.2 REST API

You can update an Account using REST API as follows:

curl -v -k -X PUT https://localhost:8443/api/v1/vaults/$vault_id/accounts/$account_id \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    -d '{"label": "My Bank", "username": "myuser", "email": "myemail@bitvault.com", "password": "**"}'

11.14.3 Docker CLI

You can update an Account using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** update-account --vault-id $vault_id \
    --account-id $account_id --label "My Bank" --username myuser \
    --email "myemail@bitvault.com" --password "**"

11.14.4 Web UI

Following image illustrates how Account details can be viewed from the PlexPass web application:

Edit Account

11.15 Deleting an Account by ID

11.15.1 Command Line

You can delete an Account using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** delete-account --account-id $account_id

11.15.2 REST API

You can delete Account by its ID using REST API as follows:

curl -v -k -X DELETE https://localhost:8443/api/v1/vaults/$vault_id/accounts/$account_id \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.15.3 Docker CLI

You can delete an Account using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** delete-account --account-id $account_id

11.16 Creating a Category

The categories are used to organize accounts in a Vault and PlexPass includes built-in categories. Here is how you can manage custom categories:

11.16.1 Command Line

You can create a Category using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** create-category --name "Finance"

11.16.2 REST API

You can create a Category using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/categories \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"  -d '{"name": "Banking"}'

11.16.3 Docker CLI

You can create a Category using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** create-category --name "Finance"

11.17 Showing Categories

11.17.1 Command Line

You can show all custom Categories using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** get-categories

11.17.2 REST API

You can show all custom categories using REST API as follows:

curl -v -k https://localhost:8443/api/v1/categories \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.17.3 Docker CLI

You can show all custom categories using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** get-categories

11.18 Deleting a Category

11.18.1 Command Line

You can delete a Category using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** delete-category --name "Banking"

11.18.2 REST API

You can delete a category using REST API as follows:

curl -v -k -X DELETE https://localhost:8443/api/v1/categories/Gaming \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN"

11.18.3 Docker CLI

You can delete a Category using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** delete-category --name "Banking"

11.19 Generate Asymmetric Encryption Keys

11.19.1 Command Line

You can generate asymmetric encryption keys using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** generate-private-public-keys

You can optionally pass a seed password to generate the secret and public keys.

11.19.2 REST API

You can generate asymmetric encryption keys using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/encryption/generate_keys \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" -d '{}'

Here is a sample output:

{
  "secret_key": "7e28c2849a6316596014fea5cedf51d21236be47766b82ae35c62d2747a85bfe",
  "public_key": "04339d73ffd49da063d0518ea6661a81e92644c8571df57af3b522a7bcbcd3232f1949d2d60e3ecb096f4a5521453df30420e514c314de8c49cb6d7f5565fe8864"
}

11.19.3 Docker CLI

You can generate asymmetric encryption keys using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass \
	-j true --master-username eddie \
	--master-password *** generate-private-public-keys

11.20 Asymmetric Encryption

11.20.1 Command Line

You can encrypt data using asymmetric encryption using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** asymmetric-encrypt --public-key $pub \
    --in-path plaintext.dat --out-path base64-encrypted.dat
./target/release/plexpass -j true --master-username eddie \
	--master-password *** asymmetric-decrypt \
    --secret-key $prv --in-path base64-encrypted.dat --out-path plaintext-copy.dat

In the above example, you first generate the asymmetric keys, then encrypt a file using the public key, and finally decrypt it using the private key.
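Putting these steps together, here is a minimal end-to-end sketch that generates a key pair, extracts the keys with jq, and round-trips a file (assuming the CLI's JSON output uses the same secret_key and public_key fields as the REST API sample in section 11.19, and that jq is installed):

keys=$(./target/release/plexpass -j true --master-username eddie \
	--master-password *** generate-private-public-keys)
pub=$(echo "$keys" | jq -r '.public_key')
prv=$(echo "$keys" | jq -r '.secret_key')
./target/release/plexpass -j true --master-username eddie \
	--master-password *** asymmetric-encrypt --public-key $pub \
    --in-path plaintext.dat --out-path base64-encrypted.dat
./target/release/plexpass -j true --master-username eddie \
	--master-password *** asymmetric-decrypt --secret-key $prv \
    --in-path base64-encrypted.dat --out-path plaintext-copy.dat
diff plaintext.dat plaintext-copy.dat && echo "round-trip OK"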

Alternatively, if you need to share a file with another user of PlexPass, you can use the following commands:

./target/release/plexpass -j true --master-username charlie --master-password *** \
    asymmetric-user-encrypt --target-username eddie --in-path accounts.csv --out-path enc_accounts.csv
./target/release/plexpass -j true --master-username eddie --master-password *** \
    asymmetric-user-decrypt --in-path enc_accounts.csv --out-path enc_accounts.csv

11.20.2 REST API

You can encrypt data using asymmetric encryption using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/encryption/asymmetric_encrypt/$pub \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    --data-binary "@plaintext.dat" > base64_encypted.dat

curl -v -k -X POST https://localhost:8443/api/v1/encryption/asymmetric_decrypt/$prv \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    --data-binary "@base64_encypted.dat" > plaintext-copy.dat

11.20.3 Docker CLI

You can encrypt data using asymmetric encryption using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username eddie --master-password *** asymmetric-encrypt --public-key $pub \
    --in-path plaintext.dat --out-path base64-encrypted.dat
    
docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username eddie --master-password *** asymmetric-decrypt \
    --secret-key $prv --in-path base64-encrypted.dat --out-path plaintext-copy.dat

You can also share encrypted files among users of PlexPass as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username eddie --master-password *** asymmetric-user-encrypt --target-username charlie \
    --in-path plaintext.dat --out-path base64-encrypted.dat

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username charlie --master-password *** asymmetric-user-decrypt \
    --in-path base64-encrypted.dat --out-path plaintext-copy.dat

11.21 Symmetric Encryption

11.21.1 Command Line

You can encrypt data using symmetric encryption using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** symmetric-encrypt --secret-key $prv \
    --in-path plaintext.dat --out-path base64-encrypted.dat
./target/release/plexpass -j true --master-username eddie \
	--master-password *** symmetric-decrypt \
    --secret-key $prv --in-path base64-encrypted.dat --out-path plaintext-copy.dat

In the above example, the same symmetric key or password is used to encrypt and decrypt the data.
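For instance, the same round trip can be performed with a plain passphrase instead of a generated key (a sketch, assuming --secret-key also accepts a passphrase, as the sentence above suggests; the passphrase value is hypothetical):

./target/release/plexpass -j true --master-username eddie \
	--master-password *** symmetric-encrypt --secret-key "my-strong-passphrase" \
    --in-path plaintext.dat --out-path base64-encrypted.dat
./target/release/plexpass -j true --master-username eddie \
	--master-password *** symmetric-decrypt --secret-key "my-strong-passphrase" \
    --in-path base64-encrypted.dat --out-path plaintext-copy.dat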

11.21.2 REST API

You can encrypt data using symmetric encryption using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/encryption/symmetric_encrypt/$prv \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    --data-binary "@plaintext.dat" > base64_encypted.dat

curl -v -k -X POST https://localhost:8443/api/v1/encryption/symmetric_decrypt/$prv \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    --data-binary "@base64_encypted.dat" > plaintext-copy.dat

11.21.3 Docker CLI

You can encrypt data using symmetric encryption using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username eddie --master-password *** symmetric-encrypt --secret-key $prv \
    --in-path plaintext.dat --out-path base64-encrypted.dat
    
docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username eddie --master-password *** symmetric-decrypt \
    --secret-key $prv --in-path base64-encrypted.dat --out-path plaintext-copy.dat

11.22 Importing and Exporting Accounts Data

11.22.1 Command Line

Given a CSV file with accounts data such as:

Type,name,url,username,password,note,totp,category
Login,Youtube,https://www.youtube.com/,youlogin,youpassword,tempor incididunt ut labore et,,
Login,Amazon,https://www.amazon.com/,amlogin1,ampassword1,"sit amet, consectetur adipiscing",,
Login,Bank of America ,https://www.boa.com/,mylogin3,mypassword3,Excepteur sint occaecat cupidatat non,,
Login,Twitter,https://www.twitter.com/,mylogin3,mypassword3,eiusmod tempor incididunt ut,,
Login,AT&T,https://www.att.com/,mylogin4,mypassword4,mynote4,,
Secure Note,Personal Note name,,,,My Secure Note,,

You can import accounts data from a CSV file using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** import-accounts --vault-id $vault_id \
    --in-path accounts.csv

You can also export accounts data as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** export-accounts --vault-id $vault_id \
    --password *** --out-path encrypted-accounts.dat

In the above example, the exported data will be encrypted with the given password, and you can either decrypt it using symmetric encryption or import it later as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** import-accounts --vault-id $vault_id \
    --password ** --in-path encrypted-accounts.dat
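Because the export is encrypted symmetrically with the supplied password, you can also decrypt the file directly instead of importing it (a sketch, assuming the export password can be passed as the symmetric secret key, per the note above):

./target/release/plexpass -j true --master-username eddie \
	--master-password *** symmetric-decrypt --secret-key ** \
    --in-path encrypted-accounts.dat --out-path accounts-plaintext.csv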

11.22.2 REST API

You can import accounts data from a CSV file using REST API as follows:

curl -v -k -X POST https://localhost:8443/api/v1/vaults/$vault_id/import \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" \
    --data-binary "@accounts.csv" -d '{}'

You can then export accounts data as an encrypted CSV file as follows:

curl -v -k --http2 -X POST "https://localhost:8443/api/v1/vaults/$vault_id/export" \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" -d '{"password": "**"}' > encrypted_csv.dat

And then import it back later as follows:

curl -v -k -X POST https://localhost:8443/api/v1/vaults/$vault_id/import \
	--header "Content-Type: application/json; charset=UTF-8" \
    --header "Authorization: Bearer $AUTH_TOKEN" --data-binary "@encrypted_csv.dat" \
    -d '{"password": "***"}'

11.22.3 Docker CLI

You can import accounts data from a CSV file using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass \
    -j true --master-username frank --master-password *** import-accounts \
    --vault-id $vault_id --in-path /files/accounts.csv

And export it without a password as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass -j true \
    --master-username frank --master-password *** export-accounts \
    --vault-id $vault_id --out-path /files/plaintext.csv

11.23 Generating Strong Password

11.23.1 Command Line

You can generate a strong password using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** generate-password

That generates a memorable or random password such as:

{"password": "**"}

11.23.2 REST API

You can generate a strong password using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/memorable -d '{}'
    
curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/random -d '{}'

You can optionally specify a password policy with the following properties:

-d '{
  "password_min_uppercase": 1,
  "password_min_lowercase": 1,
  "password_min_digits": 1,
  "password_min_special_chars": 1,
  "password_min_length": 12,
  "password_max_length": 16,
}'
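For example, a random password that satisfies the above policy can be requested in a single call (a sketch that simply combines the endpoint and the payload shown above):

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/random \
    -d '{"password_min_uppercase": 1, "password_min_lowercase": 1, "password_min_digits": 1, "password_min_special_chars": 1, "password_min_length": 12, "password_max_length": 16}'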

11.23.3 Docker CLI

You can generate a strong password using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password ** generate-password

11.24 Checking if a Password is Compromised

11.24.1 Command Line

You can check if a password is compromised using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** password-compromised --password **

That returns a flag as follows:

{"compromised":true}

11.24.2 REST API

You can check if a password is compromised using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/***/compromised

11.24.3 Docker CLI

You can check if a password is compromised using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data -v $CWD:/files plexpass \
    -j true --master-username frank --master-password *** password-compromised --password **

11.25 Checking Strength of a Password

11.25.1 Command Line

You can check strength of a password using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** password-strength --password **

That returns the password properties as follows:

{
  "strength": "MODERATE",
  "entropy": 42.303957463269825,
  "uppercase": 0,
  "lowercase": 10,
  "digits": 0,
  "special_chars": 0,
  "length": 10
}

11.25.2 REST API

You can check strength of a password using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/**/strength

11.25.3 Docker CLI

You can check strength of a password using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password *** password-strength --password ***

11.26 Checking if Email is Compromised

11.26.1 Command Line

PlexPass integrates with https://haveibeenpwned.com/, and you can check if an email or website is compromised if you have an API key from the website. Here is how you can check if an email is compromised using the CLI:

./target/release/plexpass -j true --master-username eddie \
	--master-password Cru5h_rfIt:v_Bk email-compromised \
    --email myemail@bitvault.com

This would return an error if you haven’t configured the API key, e.g.

could not check password: Validation { message: "could not find api key for HIBP", reason_code: None }

11.26.2 REST API

You can check if an email is compromised using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/emails/email/compromised

11.26.3 Docker CLI

You can check if an email is compromised using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password ** email-compromised --email myemail@mail.com

Note: The Web UI highlights accounts with a red background when they use compromised or weak passwords and shows advisories such as:

11.27 Analyzing Passwords for a Vault

11.27.1 Command Line

PlexPass integrates with https://haveibeenpwned.com/ and checks for strength, similarity, and password reuse. Here is how you can analyze all passwords in a vault using the CLI:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** analyze-vault-passwords --vault-id $vault_id

This would return a summary of the analysis:

{
  "total_accounts": 35,
  "count_strong_passwords": 1,
  "count_moderate_passwords": 20,
  "count_weak_passwords": 14,
  "count_healthy_passwords": 1,
  "count_compromised": 32,
  "count_reused": 29,
  "count_similar_to_other_passwords": 22,
  "count_similar_to_past_passwords": 0
}

Each account will be updated with advisories based on the analysis.

11.27.2 REST API

You can analyze all accounts in a Vault using REST API as follows:

curl -v -k -X POST --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/vaults/$vault_id/analyze_passwords

11.27.3 Docker CLI

You can analyze all accounts in a Vault using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password ** analyze-vault-passwords --vault-id $vault_id

11.28 Analyzing Passwords for all Vaults

11.28.1 Command Line

Here is how you can analyze passwords in all vaults using the CLI:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** analyze-all-vaults-passwords

This would return a summary of the analysis:

{
  "vault-id": {
    "total_accounts": 35,
    "count_strong_passwords": 1,
    "count_moderate_passwords": 20,
    "count_weak_passwords": 14,
    "count_healthy_passwords": 1,
    "count_compromised": 32,
    "count_reused": 29,
    "count_similar_to_other_passwords": 22,
    "count_similar_to_past_passwords": 0
  }
}

Each account will be updated with advisories based on the analysis.

11.28.2 REST API

You can analyze accounts in all Vaults using REST API as follows:

curl -v -k -X POST --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/password/analyze_all_passwords

11.28.3 Docker CLI

You can analyze accounts in all Vaults using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password ** analyze-all-vaults-passwords

11.29 Searching Usernames

PlexPass allows searching usernames for sharing data; however, by default this feature is only allowed when accessed from a trusted local network.

11.29.1 Command Line

You can search usernames using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password ** search-usernames --q ali

11.29.2 REST API

You can search usernames using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" https://localhost:8443/api/v1/usernames?q=ali

11.29.3 Docker CLI

You can search usernames using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e DATA_DIR=/data \
	-v $PARENT_DIR/PlexPassData:/data plexpass -j true --master-username frank \
    --master-password *** search-usernames --q ali

11.30 Sharing and Unsharing a Vault with another User

PlexPass allows sharing a vault with another user for read-only or read/write access to view or edit all accounts in the Vault.

11.30.1 Command Line

You can share a Vault with another user using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** share-vault --vault-id $vault_id --target-username frank

You can also unshare a Vault with another user using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** unshare-vault --vault-id $vault_id --target-username frank

Vault and Account sharing within the system leverages public key infrastructure (PKI) for secure data exchange. This process involves encrypting the encryption keys of the Vault using the intended recipient user’s public key. A message containing this encrypted data is then sent to the recipient. Upon the recipient user’s next sign-in, this data is decrypted and subsequently re-encrypted using the recipient’s public key, ensuring secure access and transfer of information.
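For example, after eddie shares the vault, frank can confirm access on his next sign-in by listing his vaults (a sketch that reuses the commands shown earlier; the shared vault should appear in frank's list):

./target/release/plexpass -j true --master-username eddie \
	--master-password *** share-vault --vault-id $vault_id --target-username frank
./target/release/plexpass -j true --master-username frank \
	--master-password *** get-vaults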

11.30.2 REST API

You can share or unshare a Vault with another user using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/vaults/$vault_id/share \
    -d '{"target_username": "frank"}'
curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/vaults/$vault_id/unshare \
    -d '{"target_username": "frank"}'

11.30.3 Docker CLI

You can share and unshare a vault using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username frank --master-password ** share-vault \
    --vault-id $vault_id --target-username charlie
docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username frank --master-password ** unshare-vault \
    --vault-id $vault_id --target-username charlie

11.30.4 Web UI

You can share using Web UI as follows:

share vault

11.31 Sharing an Account with another User

PlexPass allows sharing an account by sending the account data encrypted with the target user’s public key so that only the target user can decrypt it.

11.31.1 Command Line

You can share an Account with another user using CLI as follows:

./target/release/plexpass -j true --master-username eddie \
	--master-password *** share-account --vault-id $vault_id \
    --account-id $account_id --target-username frank

11.31.2 REST API

You can share an Account with another user using REST API as follows:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/vaults/$vault_id/accounts/$account_id/share \
    -d '{"target_username": "frank"}'

11.31.3 Docker CLI

You can share an Account with another user using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username frank --master-password ** share-account \
    --vault-id $vault_id --account-id $account_id --target-username charlie

11.32 Generating OTP code

PlexPass allows generating OTP codes based on a base-32 secret.

11.32.1 Command Line

You can generate an OTP for a particular account using the CLI as follows:

./target/release/plexpass -j true --master-username eddie --master-password *** generate-account-otp --account-id $account_id

or using a secret as follows:

./target/release/plexpass -j true --master-username eddie --master-password *** generate-otp --otp-secret "JBSWY3DPEHPK3PXP"

An OTP is also defined automatically for each user, and you can generate the user OTP using the CLI as follows:

./target/release/plexpass -j true --master-username eddie --master-password *** generate-user-otp 

11.32.2 REST API

The OTP will be included in the account API response if you have previously set up an otp-secret, e.g.,

curl -v -k --header "Content-Type: application/json; charset=UTF-8" 
	--header "Authorization: Bearer $AUTH_TOKEN" 
    https://localhost:8443/api/v1/vaults/$vault_id/accounts/$account_id

You can generate an otp for a specific account using:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" 
	--header "Authorization: Bearer $AUTH_TOKEN" 
    https://localhost:8443/api/v1/accounts/{account-id}/otp/generate 

or, generate otp using a secret

curl -v -k --header "Content-Type: application/json; charset=UTF-8" 
	--header "Authorization: Bearer $AUTH_TOKEN" 
    https://localhost:8443/api/v1/otp/generate -d '{"otp_secret": "***"}'

11.32.3 Docker CLI

You can generate an OTP for a particular account using Docker CLI as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username eddie --master-password *** generate-account-otp \
    --account-id $account_id

or using a secret as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username eddie --master-password *** generate-otp \
    --otp-secret "**"

Similarly, you can generate user-otp as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true  \
    --master-username eddie --master-password *** generate-user-otp

11.33 Resetting multi-factor authentication

The PlexPass Web application allows registering security keys for multi-factor authentication, and you can reset it using recovery codes as follows:

11.33.1 Command Line

You can reset multi-factor authentication using the CLI as follows:

First, retrieve the otp-code based on the user’s otp-secret (which can be viewed in the Web UI or obtained from a previous API/CLI call):

otp=`./target/release/plexpass -j true generate-otp --otp-secret ***|jq '.otp_code'`

Then use the otp and recovery-code to reset the multi-factor authentication as follows:

./target/release/plexpass -j true --master-username charlie \
	--master-password *** --otp-code $otp reset-multi-factor-authentication \
    --recovery-code ***

11.33.2 REST API

First generate OTP with otp-secret such as:

curl -k --header "Content-Type: application/json; charset=UTF-8" \
	https://localhost:8443/api/v1/otp/generate -d '{"otp_secret": "**"}'

which would return otp, e.g.,

{"otp_code":361509}

Then sign in with the username, password, and otp-code:

curl -v -k https://localhost:8443/api/v1/auth/signin \
	--header "Content-Type: application/json; charset=UTF-8" \
    -d '{"username": "bob", "master_password": "**", "otp_code": 123}'

The signin API will return an access token in the response header, which you can then use for resetting the multi-factor settings:

curl -v -k --header "Content-Type: application/json; charset=UTF-8" \
	--header "Authorization: Bearer $AUTH_TOKEN" \
    https://localhost:8443/api/v1/auth/reset_mfa -d '{"recovery_code": "***"}'

11.33.3 Docker CLI

First, retrieve the otp-code based on the user’s otp-secret (which can be viewed in the Web UI or obtained from a previous API/CLI call):

otp=`docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    generate-otp --otp-secret ***|jq '.otp_code'`

Then use the otp and recovery-code to reset the multi-factor-authentication as follows:

docker run -e DEVICE_PEPPER_KEY=$DEVICE_PEPPER_KEY -e RUST_BACKTRACE=1 \
	-e DATA_DIR=/data -v $PARENT_DIR/PlexPassData:/data plexpass -j true \
    --master-username charlie --master-password *** \
    --otp-code $otp reset-multi-factor-authentication --recovery-code ***

11.33.4 Web UI

When registering a security key, PlexPass displays recovery codes that can be used to reset multi-factor authentication if you lose your security key, and you can perform the reset in the Web application upon sign-in, e.g.,

Recovering multi-factor authentication keys

11.34 Security Dashboard and Auditing

The PlexPass web application includes a security dashboard to monitor the health of all passwords and allows users to view audit logs for all changes to their accounts, e.g.,

Security Dashboard

12.0 Summary

The design principles and architectural framework outlined above showcase PlexPass’s advanced capabilities in password management, setting it apart from conventional cloud-based password managers. The key advantages of PlexPass include:

  1. End-to-End Encryption and Zero-Knowledge Architecture: By encrypting all data with strong algorithms and ensuring that decryption happens only on the user’s device, PlexPass provides a high level of security. The zero-knowledge architecture means that the system assumes no trust in any party when accessing secured user data.
  2. Local Data Storage and Management: With no reliance on cloud storage, PlexPass reduces the risk of data breaches and privacy concerns associated with cloud services.
  3. Advanced Cryptographic Techniques: PlexPass’s use of Argon2 for password hashing, AES-256 for symmetric encryption, and ECC for asymmetric encryption, coupled with envelope encryption, positions it at the forefront of modern cryptographic practices.
  4. User-Friendly Experience with Strong Security Practices: Despite its focus on security, PlexPass promises a great user experience through its command-line tool and web-based UI.
  5. Open Source with Regular Updates: PlexPass is open source, which allows for community scrutiny and can lead to the early detection and rectification of vulnerabilities.
  6. Physical Security Considerations and Data Breach Alerts: PlexPass analyzes passwords for breaches, weak strength, similarity with other passwords and provides a dashboard for monitoring password security.
  7. Multi-Device and Secure Sharing Features: The ability to share passwords securely with nearby trusted devices without cloud risks, and the support for multi-device use, make it versatile and family-friendly.
  8. Strong Master Password and Password Generation: Encouraging strong master passwords and providing tools for generating robust passwords further enhance individual account security.
  9. Detailed Domain Model with Advanced Data Storage and Network Communication: PlexPass’s detailed model covers all aspects of password management and security, ensuring thorough protection at each level.
  10. Local Control and Privacy: With PlexPass, all data is stored locally, providing users with full control over their password data. This is particularly appealing for those who are concerned about privacy and don’t want their sensitive information stored on a cloud server.
  11. Customization and Flexibility: PlexPass can be customized to fit specific needs and preferences. Users who prefer to have more control over the configuration and security settings may find PlexPass more flexible than cloud-based solutions.
  12. Cost Control: Hosting your own password manager might have cost benefits, as you avoid ongoing subscription fees associated with many cloud-based password managers.
  13. Transparency and Trust: PlexPass is open source, so users can inspect the source code for any potential security issues, giving them a higher degree of trust in the application.
  14. Reduced Attack Surface: By not relying on cloud connectivity, offline managers are not susceptible to online attacks targeting cloud storage.
  15. Control over Data: Users have complete control over their data, including how it’s stored and backed up.
  16. Potentially Lower Risk of Service Shutdown: Since the data is stored locally, the user’s access to their passwords is not contingent on the continued operation of a third-party service.
  17. Multi-Factor and Local Authentication: PlexPass supports Multi-Factor Authentication based on One-Time-Passwords (OTP), FIDO, WebAuthN, and YubiKey for authentication.

In summary, PlexPass, with its extensive features, represents a holistic and advanced approach to password management. You can download it freely from https://github.com/bhatti/PlexPass and provide your feedback.

September 17, 2023

Building a Hybrid Authorization System for Granular Access Control

Filed under: API,Authorization — admin @ 3:46 pm

An access control system establishes a structure to manage the accessibility of resources within an organization or digital environment. It aims to prevent unauthorized individuals or entities from accessing data or taking actions outside their designated privileges. Such systems can govern physical entry points—like determining who can access a building—or digital ones—such as delineating permissions on a computer network or software platform. Key components of an access control system include:

  1. Authentication: It is the process of verifying the identity of a user, application, or system. This process ensures that the entity requesting access is who or what it claims to be. Common methods of authentication include username and password, multi-factor authentication, biometric verification, token-based authentication, and certificate-based authentication.
  2. Authorization: It determines the level of access, or permissions, granted to a legitimately authenticated user or system. Essentially, it answers the question: “What is this authenticated entity allowed to do or see within the system?”.
  3. Audit and Monitoring: It refers to the systematic tracking, recording, and analysis of activities or events within a system or network. These activities often include user actions, system accesses, and operations that affect data and resources. The primary goals are to ensure compliance with established policies, detect unauthorized or abnormal activities, and facilitate the identification of vulnerabilities or weaknesses. Elements often involved in audit and monitoring can include log files, real-time monitoring, alerts and notifications, data analytics, and compliance reporting.
  4. Policy Management: It involves the creation, maintenance, and enforcement of rules, guidelines, and standard operating procedures that govern the behavior of users and systems within an organization or environment. These policies may include access policies, security policies, operational policies, compliance policies, change management policies, and policy auditing.

In this article, we will focus on authorization, which may use following popular mechanisms for enforcing access control:

  1. Role-Based Access Control (RBAC): In RBAC, permissions are associated with roles, and users are assigned to these roles. For example, a “Manager” role might have the ability to add or remove employees from a system, while an “Employee” role might only be able to view information. When a user is assigned a role, they inherit all the permissions that come with it.
  2. Attribute-Based Access Control (ABAC): ABAC is a more flexible and complex system that uses attributes as building blocks in a rule-based approach to control access. These attributes can be associated with the user (e.g., age, department, job role), action (e.g., read, write), resource (e.g., file type, location), or even environmental factors (e.g., time of day, network security level). Policies are then crafted to allow or deny actions based on these attributes.
  3. Policy-Based Access Control (PBAC): PBAC is similar to ABAC but tends to be more dynamic, incorporating real-time information into its decision-making process. For example, a PBAC system might evaluate current network threat levels or the outcome of a risk assessment to determine whether access should be granted or denied. Policies can be complex, allowing for a high degree of flexibility and context-aware decisions.
  4. Access Control Lists (ACLs): A list specifying what actions a user or system can or cannot perform.
  5. Capabilities: In a capability-based security model, permissions are attached to tokens (capabilities) rather than to subjects (e.g., users) or objects (e.g., files). These tokens can be passed around between users and systems. Having a token allows a user to access a resource or perform an action. This model decentralizes the control of access, making it flexible but also potentially harder to manage at scale.
  6. Permissions: This is a simple and straightforward model where each object (like a file or database record) has associated permissions that specify which users can perform which types of operations (read, write, delete, etc.). This is often seen in file systems where each file and directory has an associated set of permission flags.
  7. Discretionary Access Control (DAC): In DAC models, the owner of the resource has the discretion to set its permissions. For example, in many operating systems, the creator of a file can decide who can read or write to that file.
  8. Mandatory Access Control (MAC): Unlike DAC, where users have some discretion over permissions, in MAC, the system enforces policies that users cannot alter. These policies often use labels or classifications (e.g., Top Secret, Confidential) to determine who can access what.

The approaches to authorization are not mutually exclusive and can be integrated to form hybrid systems. For instance, an enterprise might rely on RBAC for broad-based access management, while also employing ABAC or PBAC to handle more nuanced or sensitive use-cases. The remainder of this article will concentrate on the design and implementation of such hybrid authorization systems.

Industry Standards

Following are popular industry standards to provide a common framework for the design, implementation, and management of security policies across different systems:

  • OAuth 2.0: IETF (Internet Engineering Task Force) standard to provide delegated access without sharing credentials.
  • OpenID Connect (OIDC): OpenID Foundation standard that layers on top of OAuth 2.0, primarily used for authentication but often used in conjunction with authorization.
  • Security Assertion Markup Language (SAML): OASIS standard to exchange authentication and authorization information between parties.
  • NIST Role-Based Access Control (RBAC): to manage permissions based on user roles.
  • JSON Web Token (JWT): IETF standard to encode claims between two parties, which is used with OAuth and OIDC standards.

These standards often complement each other and can be used in combination to build robust, secure, and flexible authorization mechanisms.

Popular Authorization Systems

Various open-source and commercial authorization systems are available to cater to different needs, from simple role-based systems to complex policy-driven solutions. Here are some popular open-source authorization systems:

  • Open Policy Agent (OPA): A general-purpose policy engine that enables fine-grained, context-aware access control across the stack.
  • Casbin: A powerful, efficient, and lightweight access control library that supports various access control models.
  • Authelia: A single sign-on (SSO) and two-factor authentication server.
  • Keycloak: Offers integrated SSO and IDM for browser apps and RESTful web services, along with extensive authorization capabilities.
  • Apache Shiro: A Java security framework that performs authentication, authorization, cryptography, and session management.
  • Spring Security: Provides comprehensive security features for Java applications, including robust access control capabilities.
  • FreeIPA: An integrated Identity and Authentication solution for Linux/UNIX networked environments.
  • Pomerium: An identity-aware proxy that enables secure access to internal applications.
  • ORY: A set of cloud-native identity infrastructure components, which include ORY Keto for access control.
  • PlexRBAC: An open-source RBAC implementation that I wrote back in 2010 using Java language.
  • PlexRBACJS: My open-source RBAC implementation using JavaScript language.
  • SaasRBAC: My open-source RBAC implementation using Rust language.

Following are commercial offerings for authorization systems:

  • AWS IAM and Cognito: Amazon Web Services offers these services for identity and access management, both within AWS and for apps using AWS backend services.
  • Amazon Verified Permissions: Amazon Verified Permissions is a scalable, fine-grained permissions management and authorization service for custom applications.
  • Okta: Provides a wide range of identity and access management solutions including strong authorization controls.
  • Microsoft Azure AD: Offers identity services and access management through Azure’s cloud platform.
  • Cyral: Focuses on data layer authorization, especially for data clouds and data warehouses.
  • OneLogin: Provides unified access management, making it easier to secure connections across users and devices.
  • Ping Identity: Provides solutions for both workforce and customer identity types.
  • RSA SecurID Suite: Offers highly secure and flexible access control, including role-based and policy-driven controls.
  • Saviom: Specializes in role-based access control for resource scheduling and project portfolio management.
  • SailPoint: Offers intelligent identity management solutions, including fine-grained entitlement management and policy enforcement.
  • ForgeRock: An identity management solution designed for consumer-facing applications, with extensive support for access management and federation.
  • Idaptive (now part of CyberArk): Provides end-to-end identity automation and adaptive security.

Another noteworthy authorization solution is Google’s Zanzibar. While not available as an open-source or commercial product, Google has released a whitepaper outlining its architecture and principles. Zanzibar is engineered to meet the demands of large, intricate systems and is capable of processing millions of queries per second. The aforementioned authorization systems offer various configurations and customizations to meet an organization’s particular needs. We plan to draw from the design elements of these existing systems to create a robust and versatile authorization framework.

Design Tenets

Our authorization system will use a hybrid approach combining Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Relationship-Based Access Control (ReBAC), and it will incorporate various features inspired by the likes of OPA, AWS IAM, Google Zanzibar, and my previous implementations of similar authorization systems. Following are the primary design tenets for building such an authorization system:

  • Scalability: Capable of handling a large number of authorization requests per second and expand to accommodate growing numbers of users and resources.
  • Flexibility: Supports RBAC, ABAC, and ReBAC, allowing for the handling of various scenarios.
  • Fine-grained Control: Context-aware decisions based on factors such as time, location, and real-time data, as well as multiple attributes of the subject, object, and environment.
  • Auditing and Monitoring: Detailed logs for all access attempts and policy changes, and real-time insights into access patterns, possibly with alerting for suspicious activities.
  • Security: Applies least privilege, and enforces data masking and redaction.
  • Usability: Easy-to-use interfaces for assigning, changing, and revoking roles.
  • Extensibility: Comprehensive APIs for integration with other systems and services, and the ability to run custom code during the authorization process.
  • Reliability: Minimal downtime with backup and recovery.
  • Compliance: Adhere to regulatory requirements like GDPR, HIPAA, etc., and track changes to policies for auditing purposes.
  • Multi-Tenancy: Support multiple services and products with a variety of authorization models under a single unified system.
  • Policy Versioning and Namespacing: Allow multiple versions and namespaces of policies, making it possible to manage complex policy changes.
  • Balance between Expressiveness and Performance: Provide a good balance between the expressive policies offered by OPA and the high performance offered by Zanzibar.
  • Policy Validation: Check for invalid, unsafe, or ambiguous policies and prevent users from making accidental mistakes.
  • Performance Optimization: Use caching, indexing, parallel processing, lazy evaluation, rule simplification, automated reasoning, decision trees, and other optimization techniques to improve the performance of the system.

Authorization Concepts

Following is a list of high-level data model concepts that are typically used in the authorization systems:

Principal

The entity (which could be a user, system, or another service) that is making the request. Principals are often authenticated before they are authorized to perform an action.

Subject

Similar to a principal, the subject refers to the entity that is attempting to access a particular resource. In some contexts, a subject may represent a real person that has multiple principal identities for various systems.

Permission

An action that a principal is allowed to perform on a particular resource. For example, reading a file, updating a database record, or deleting an account.

Claim

A statement made by the principal, usually after authentication, that annotates the principal with specific attributes (e.g., username, roles, permissions). Claims are often used in token-based authentication systems like JWT to carry information about the principal.

Role

A named collection of permissions that can be assigned to a principal. Roles simplify the management of permissions by grouping them together under a single label.

Group

A collection of principals that are treated as a single unit for the purpose of granting permissions. For example, an “Admins” group might be given a role that allows them to perform administrative actions.

Access Policy

A set of rules that define the conditions under which a particular action is allowed or denied. Policies can be simple (“Admins can do anything”) or complex (“Users can edit a document only if they are the creator and the document is in ‘Draft’ status”).

Relation

Relations define how different entities are connected. For instance, a “user” can have a “memberOf” relation with a “group”, or a “document” can have an “ownedBy” relation with a “user”.

Resource

The object that the principal wants to access (e.g., a file, a database record).

Context

Additional situational information (e.g., IP address, time of day) that might influence the authorization decision.

Namespace

Each namespace could serve as a container for a set of resources, roles, and permissions.

Scope or Realm

This often refers to the level or context in which a permission is granted. For instance, in OAuth, scopes are used to specify what access a token grants the user, like “read-only” access to a particular resource.

Rule

A specific condition or criterion in a policy that dictates whether access should be granted or denied.

Dynamic Conditions

Dynamic conditions or predicates are expressions that must be evaluated at runtime to determine if access should be granted or denied. Dynamic conditions consist of attributes, operators, and values, e.g.,

if (principal.role == "employee" AND principal.status == "active") OR (time < "17:00") then ALLOW

if (principal.role == "admin") OR (document.owner == principal.id) then ALLOW

if IP_address in [allowed_IPs] then ALLOW

if time >= 09:00 AND time <= 17:00 then ALLOW

Authorization Data Model

Following data model is defined in Protocol Buffers definition language based on above authorization concepts:

Data Model for Hybrid Authorization

Organization

The Organization abstracts a boundary of authorization data and it can have multiple namespaces for different security realms or segments of security domains. Here is the definition of Organization:

message Organization {
  // ID unique identifier assigned to this organization.
  string id = 1;
  // Version
  int64 version = 2;
  // Name of organization.
  string name = 3;
  // Allowed Namespaces for organization.
  repeated string namespaces = 4;
  // url for organization.
  string url = 5;
  // Optional parent ids.
  repeated string parent_ids = 6;
}

Principal

The Principal abstracts the subject who is making an authorization request to perform an action on a target resource based on access rules and dynamic conditions. A Principal belongs to an organization and can be associated with groups, roles (RBAC), permissions, and relationships (ReBAC). The Principal defines the following properties:

message Principal {
  // ID unique identifier assigned to this principal.
  string id = 1;
  // Version
  int64 version = 2;
  // OrganizationId of the principal user.
  string organization_id = 3;
  // Allowed Namespaces for principal, should be subset of namespaces in organization.
  repeated string namespaces = 4;
  // Username of the principal user.
  string username = 5;
  // Email of the principal user.
  string email = 6;
  // Name of the principal user.
  string name = 7;
  // Attributes of principal
  map<string, string> attributes = 8;
  // Groups that the principal belongs to.
  repeated string group_ids = 9;
  // Roles that the principal belongs to.
  repeated string role_ids = 10;
  // Permissions that the principal belongs to.
  repeated string permission_ids = 11;
  // Relationships that the principal belongs to.
  repeated string relation_ids = 12;
}

Resource and ResourceInstance

The Resource represents the target object for performing an action and checking an access policy. A resource can also be used to represent an object with a quota that can be allocated or assigned based on access policies. Here is the definition of Resource and ResourceInstance:

message Resource {
  // ID unique identifier assigned to this resource.
  string id = 1;
  // Version
  int64 version = 2;
  // Namespace for resource.
  string namespace = 3;
  // Name of the resource.
  string name = 4;
  // capacity of resource.
  int32 capacity = 5;
  // Attributes of resource.
  map<string, string> attributes = 6;
  // AllowedActions that can be performed.
  repeated string allowed_actions = 7;
}

enum ResourceState {
  ALLOCATED = 0;
  AVAILABLE = 1;
}

message ResourceInstance {
  // ID unique identifier assigned to this resource instance.
  string id = 1;
  // Version
  int64 version = 2;
  // ResourceID of the resource.
  string resource_id = 3;
  // Namespace for resource.
  string namespace = 4;
  // Principal that is using the resource.
  string principal_id = 5;
  // state of resource instance.
  ResourceState state = 6;
  // Time duration in milliseconds after which instance will expire.
  google.protobuf.Duration expiry = 7;
}

Permission

The Permission defines access policies for a resource, including dynamic conditions based on GO Templates that are evaluated before granting access:

enum Effect {
  PERMITTED = 0;
  DENIED = 1;
}

message Permission {
  // ID unique identifier assigned to this permission.
  string id = 1;
  // Version
  int64 version = 2;
  // Namespace for permission.
  string namespace = 3;
  // Scope for permission.
  string scope = 4;
  // Actions that can be performed.
  repeated string actions = 5;
  // Resource for the action.
  string resource_id = 6;
  // Effect Permitted or Denied
  Effect effect = 7;
  // Constraints expression with dynamic properties.
  string constraints = 8;
}

Role

A Principal can be associated with one or more Roles, where each Role has a name and can optionally be associated with Permissions to implement RBAC-based access control, e.g.,

message Role {
  // ID unique identifier assigned to this role.
  string id = 1;
  // Version
  int64 version = 2;
  // Namespace for permission.
  string namespace = 3;
  // Name of the role.
  string name = 4;
  // PermissionIDs that can be performed.
  repeated string permission_ids = 5;
  // Optional parent ids
  repeated string parent_ids = 6;
}

A Role can also inherit from multiple other Roles so that common Permissions can be defined in the parent Role(s) and specific Permissions in the derived Roles.

Group

A Principal can be linked to multiple Groups, and each Group can be tied to several Roles. A Principal inherits not only the Permissions directly associated with it but also those granted through the Roles it holds and the Groups it belongs to. Here is the Group definition:

message Group {
  // ID unique identifier assigned to this group.
  string id = 1;
  // Version
  int64 version = 2;
  // Namespace for permission.
  string namespace = 3;
  // Name of the group.
  string name = 4;
  // RoleIDs that are associated.
  repeated string role_ids = 5;
  // Optional parent ids.
  repeated string parent_ids = 6;
}

A Group can also have one or more parents, similar to Roles, so that access policies can check group membership or inherit all permissions that belong to a Group through its association with Roles.

Relationship

A Principal can define relationships with resources or target objects, and access policies can check for the existence of a relationship before permitting an action, which enables ReBAC-based policies. Although a Relationship may seem similar to a Role or a Group, it differs because a Relationship directly associates a Principal with a Resource, whereas a Role can be associated with multiple Principals and is only indirectly associated with a Resource through a Permission object. Here is the definition of a Relationship:

message Relationship {
  // ID unique identifier assigned to this relationship.
  string id = 1;
  // Version
  int64 version = 2;
  // Namespace for permission.
  string namespace = 3;
  // Relation name.
  string relation = 4;
  // PrincipalID for relationship.
  string principal_id = 5;
  // ResourceID for relationship.
  string resource_id = 6;
  // Attributes of relationship.
  map<string, string> attributes = 7;
}

API Specifications for Authorization

The Authorization APIs are grouped into control-plane APIs for managing the above data and their relationships with Principals, and data-plane (behavioral) APIs for authorization decisions. The following section defines the control-plane APIs in the Protocol Buffers definition language for managing authorization data and policies:

Control-Plane APIs for managing Organizations

service OrganizationsService {
  // Create Organizations swagger:route POST /api/v1/organizations organizations createOrganizationRequest
  // Responses:
  // 200: createOrganizationResponse
  rpc Create (CreateOrganizationRequest) returns (CreateOrganizationResponse);

  // Update Organizations swagger:route PUT /api/v1/organizations/{id} organizations updateOrganizationRequest
  // Responses:
  // 200: updateOrganizationResponse
  rpc Update (UpdateOrganizationRequest) returns (UpdateOrganizationResponse);

  // Get Organization swagger:route GET /api/v1/organizations/{id} organizations getOrganizationRequest
  // Responses:
  // 200: getOrganizationResponse
  rpc Get (GetOrganizationRequest) returns (GetOrganizationResponse);

  // Query Organization swagger:route GET /api/v1/organizations organizations queryOrganizationRequest
  // Responses:
  // 200: queryOrganizationResponse
  rpc Query (QueryOrganizationRequest) returns (stream QueryOrganizationResponse);

  // Delete Organization swagger:route DELETE /api/v1/organizations/{id} organizations deleteOrganizationRequest
  // Responses:
  // 200: deleteOrganizationResponse
  rpc Delete (DeleteOrganizationRequest) returns (DeleteOrganizationResponse);
}

The above definition also embeds OpenAPI (swagger) annotations for REST-based APIs so that the same behavior is available over either gRPC or REST protocols.

Control-Plane APIs for managing Principals

The following specification defines APIs to manage Principals and to add or remove their associations with Roles, Groups, Permissions, and Relationships:

service PrincipalsService {
  // Create Principals swagger:route POST /api/v1/{organization_id}/principals principals createPrincipalRequest
  // Responses:
  // 200: createPrincipalResponse
  rpc Create (CreatePrincipalRequest) returns (CreatePrincipalResponse);

  // Update Principals swagger:route PUT /api/v1/{organization_id}/principals/{id} principals updatePrincipalRequest
  // Responses:
  // 200: updatePrincipalResponse
  rpc Update (UpdatePrincipalRequest) returns (UpdatePrincipalResponse);

  // Get Principal swagger:route GET /api/v1/{organization_id}/{namespace}/principals/{id} principals getPrincipalRequest
  // Responses:
  // 200: getPrincipalResponse
  rpc Get (GetPrincipalRequest) returns (GetPrincipalResponse);

  // Query Principal swagger:route GET /api/v1/{organization_id}/principals principals queryPrincipalRequest
  // Responses:
  // 200: queryPrincipalResponse
  rpc Query (QueryPrincipalRequest) returns (stream QueryPrincipalResponse);

  // Delete Principal swagger:route DELETE /api/v1/{organization_id}/principals/{id} principals deletePrincipalRequest
  // Responses:
  // 200: deletePrincipalResponse
  rpc Delete (DeletePrincipalRequest) returns (DeletePrincipalResponse);

  // AddGroups Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/groups/add principals addGroupsToPrincipalRequest
  // Responses:
  // 200: addGroupsToPrincipalResponse
  rpc AddGroups (AddGroupsToPrincipalRequest) returns (AddGroupsToPrincipalResponse);

  // DeleteGroups Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/groups/delete principals deleteGroupsToPrincipalRequest
  // Responses:
  // 200: deleteGroupsToPrincipalResponse
  rpc DeleteGroups (DeleteGroupsToPrincipalRequest) returns (DeleteGroupsToPrincipalResponse);

  // AddRoles Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/roles/add principals addRolesToPrincipalRequest
  // Responses:
  // 200: addRolesToPrincipalResponse
  rpc AddRoles (AddRolesToPrincipalRequest) returns (AddRolesToPrincipalResponse);

  // DeleteRole Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/roles/delete principals deleteRolesToPrincipalRequest
  // Responses:
  rpc DeleteRoles (DeleteRolesToPrincipalRequest) returns (DeleteRolesToPrincipalResponse);

  // AddPermissions Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/permissions/add principals addPermissionsToPrincipalRequest
  // Responses:
  // 200: addPermissionsToPrincipalResponse
  rpc AddPermissions (AddPermissionsToPrincipalRequest) returns (AddPermissionsToPrincipalResponse);

  // DeletePermissions Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/permissions/delete principals deletePermissionsToPrincipalRequest
  // Responses:
  // 200: deletePermissionsToPrincipalResponse
  rpc DeletePermissions (DeletePermissionsToPrincipalRequest) returns (DeletePermissionsToPrincipalResponse);

  // AddRelationships Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/relations/add principals addRelationshipsToPrincipalRequest
  // Responses:
  rpc AddRelationships (AddRelationshipsToPrincipalRequest) returns (AddRelationshipsToPrincipalResponse);

  // DeleteRelationships Principal swagger:route PUT /api/v1/{organization_id}/{namespace}/principals/{id}/relations/delete principals deleteRelationshipsToPrincipalRequest
  // Responses:
  // 200: deleteRelationshipsToPrincipalResponse
  rpc DeleteRelationships (DeleteRelationshipsToPrincipalRequest) returns (DeleteRelationshipsToPrincipalResponse);
}

The above definition likewise embeds OpenAPI annotations so that principal and group-membership management is available over both gRPC and REST protocols.

Control-Plane APIs for managing Resources

service ResourcesService {
  // Create Resources swagger:route POST /api/v1/{organization_id}/{namespace}/resources resources createResourceRequest
  // Responses:
  // 200: createResourceResponse
  rpc Create (CreateResourceRequest) returns (CreateResourceResponse);

  // Update Resources swagger:route PUT /api/v1/{organization_id}/{namespace}/resources/{id} resources updateResourceRequest
  // Responses:
  // 200: updateResourceResponse
  rpc Update (UpdateResourceRequest) returns (UpdateResourceResponse);

  // Query Resource swagger:route GET /api/v1/{organization_id}/{namespace}/resources resources queryResourceRequest
  // Responses:
  // 200: queryResourceResponse
  rpc Query (QueryResourceRequest) returns (stream QueryResourceResponse);

  // Delete Resource swagger:route DELETE /api/v1/{organization_id}/{namespace}/resources/{id} resources deleteResourceRequest
  // Responses:
  // 200: deleteResourceResponse
  rpc Delete (DeleteResourceRequest) returns (DeleteResourceResponse);

  // CountResourceInstances Resources swagger:route GET /api/v1/{organization_id}/{namespace}/resources/{id}/instance_count resources countResourceInstancesRequest
  // Responses:
  // 200: countResourceInstancesResponse
  rpc CountResourceInstances (CountResourceInstancesRequest) returns (CountResourceInstancesResponse);

  // QueryResourceInstances Resources swagger:route GET /api/v1/{organization_id}/{namespace}/resources/{id}/instances resources queryResourceInstanceRequest
  // Responses:
  // 200: queryResourceInstanceResponse
  rpc QueryResourceInstances (QueryResourceInstanceRequest) returns (stream QueryResourceInstanceResponse);
}

Control-Plane APIs for managing Groups

service GroupsService {
  // Create Groups swagger:route POST /api/v1/{organization_id}/{namespace}/groups groups updateGroupRequest
  // Responses:
  // 200: updateGroupResponse
  rpc Create (CreateGroupRequest) returns (CreateGroupResponse);

  // Update Groups swagger:route PUT /api/v1/{organization_id}/{namespace}/groups groups/{id} updateGroupRequest
  // Responses:
  // 200: updateGroupResponse
  rpc Update (UpdateGroupRequest) returns (UpdateGroupResponse);

  // Query Group swagger:route GET /api/v1/{organization_id}/{namespace}/groups groups queryGroupRequest
  // Responses:
  // 200: queryGroupResponse
  rpc Query (QueryGroupRequest) returns (stream QueryGroupResponse);

  // Delete Group swagger:route DELETE /api/v1/{organization_id}/{namespace}/groups/{id} groups deleteGroupRequest
  // Responses:
  // 200: deleteGroupResponse
  rpc Delete (DeleteGroupRequest) returns (DeleteGroupResponse);

  // AddRoles Group swagger:route PUT /api/v1/{organization_id}/{namespace}/groups/{id}/roles/add groups addRolesToGroupRequest
  // Responses:
  // 200: addRolesToGroupResponse
  rpc AddRoles (AddRolesToGroupRequest) returns (AddRolesToGroupResponse);

  // DeleteRoles Group swagger:route PUT /api/v1/{organization_id}/{namespace}/groups/{id}/roles/delete groups deleteRolesToGroupRequest
  // Responses:
  // 200: deleteRolesToGroupResponse
  rpc DeleteRoles (DeleteRolesToGroupRequest) returns (DeleteRolesToGroupResponse);
}

Control-Plane APIs for managing Roles

service RolesService {
  // Create Roles swagger:route POST /api/v1/{organization_id}/{namespace}/roles roles createRoleRequest
  // Responses:
  // 200: createRoleResponse
  rpc Create (CreateRoleRequest) returns (CreateRoleResponse);

  // Update Roles swagger:route PUT /api/v1/{organization_id}/{namespace}/roles/{id} roles updateRoleRequest
  // Responses:
  // 200: updateRoleResponse
  rpc Update (UpdateRoleRequest) returns (UpdateRoleResponse);

  // Query Role swagger:route GET /api/v1/{organization_id}/{namespace}/roles roles queryRoleRequest
  // Responses:
  // 200: queryRoleResponse
  rpc Query (QueryRoleRequest) returns (stream QueryRoleResponse);

  // Delete Role swagger:route DELETE /api/v1/{organization_id}/{namespace}/roles/{id} roles deleteRoleRequest
  // Responses:
  // 200: deleteRoleResponse
  rpc Delete (DeleteRoleRequest) returns (DeleteRoleResponse);

  // AddPermissions Role swagger:route PUT /api/v1/{organization_id}/{namespace}/roles/{id}/permissions/add roles addPermissionsToRoleRequest
  // Responses:
  // 200: addPermissionsToRoleResponse
  rpc AddPermissions (AddPermissionsToRoleRequest) returns (AddPermissionsToRoleResponse);

  // DeletePermissions Role swagger:route PUT /api/v1/{organization_id}/{namespace}/roles/{id}/permissions/delete roles deletePermissionsToRoleRequest
  // Responses:
  // 200: deletePermissionsToRoleResponse
  rpc DeletePermissions (DeletePermissionsToRoleRequest) returns (DeletePermissionsToRoleResponse);
}

Control-Plane APIs for managing Permissions

service PermissionsService {
  // Create Permissions swagger:route POST /api/v1/{organization_id}/{namespace}/permissions permissions createPermissionRequest
  // Responses:
  // 200: createPermissionResponse
  rpc Create (CreatePermissionRequest) returns (CreatePermissionResponse);

  // Update Permissions swagger:route PUT /api/v1/{organization_id}/{namespace}/permissions/{id} permissions updatePermissionRequest
  // Responses:
  // 200: updatePermissionResponse
  rpc Update (UpdatePermissionRequest) returns (UpdatePermissionResponse);

  // Query Permission swagger:route GET /api/v1/{organization_id}/{namespace}/permissions permissions queryPermissionRequest
  // Responses:
  // 200: queryPermissionResponse
  rpc Query (QueryPermissionRequest) returns (stream QueryPermissionResponse);

  // Delete Permission swagger:route DELETE /api/v1/{organization_id}/{namespace}/permissions/{id} permissions deletePermissionRequest
  // Responses:
  // 200: deletePermissionResponse
  rpc Delete (DeletePermissionRequest) returns (DeletePermissionResponse);
}

Control-Plane APIs for managing Relationships

service RelationshipsService {
  // Create Relationships swagger:route POST /api/v1/{organization_id}/{namespace}/relations relationships createRelationshipRequest
  // Responses:
  // 200: createRelationshipResponse
  rpc Create (CreateRelationshipRequest) returns (CreateRelationshipResponse);

  // Update Relationships swagger:route PUT /api/v1/{organization_id}/{namespace}/relations/{id} relationships updateRelationshipRequest
  // Responses:
  // 200: updateRelationshipResponse
  rpc Update (UpdateRelationshipRequest) returns (UpdateRelationshipResponse);

  // Query Relationship swagger:route GET /api/v1/{organization_id}/{namespace}/relations relationships queryRelationshipRequest
  // Responses:
  // 200: queryRelationshipResponse
  rpc Query (QueryRelationshipRequest) returns (stream QueryRelationshipResponse);

  // Delete Relationship swagger:route DELETE /api/v1/{organization_id}/{namespace}/relations/{id} relationships deleteRelationshipRequest
  // Responses:
  // 200: deleteRelationshipResponse
  rpc Delete (DeleteRelationshipRequest) returns (DeleteRelationshipResponse);
}

Data-Plane APIs for Authorization

The following specification defines APIs for authorizing access to resources based on permissions and constraints, as well as operations to allocate and deallocate resources:

service AuthZService {
  // Authorize swagger:route POST /api/v1/{organization_id}/{namespace}/{principal_id}/auth authz authRequest
  // Responses:
  // 200: authResponse
  rpc Authorize (AuthRequest) returns (AuthResponse);

  // Check swagger:route POST /api/v1/{organization_id}/{namespace}/{principal_id}/auth/constraints authz checkConstraintsRequest
  // Responses:
  // 200: checkConstraintsResponse
  rpc Check (CheckConstraintsRequest) returns (CheckConstraintsResponse);

  // Allocate Resources swagger:route PUT /api/v1/{organization_id}/{namespace}/resources/{id}/allocate/{principal_id} resources allocateResourceRequest
  // Responses:
  // 200: allocateResourceResponse
  rpc Allocate (AllocateResourceRequest) returns (AllocateResourceResponse);

  // Deallocate Resources swagger:route PUT /api/v1/{organization_id}/{namespace}/resources/{id}/deallocate/{principal_id} resources deallocateResourceRequest
  // Responses:
  // 200: deallocateResourceResponse
  rpc Deallocate (DeallocateResourceRequest) returns (DeallocateResourceResponse);
}

Authorize API

The Authorize API takes an AuthRequest that defines the Principal-Id, Resource-Name, Action, and context attributes, and checks permissions before granting access:

message AuthRequest {
  string organization_id = 1;
  string namespace = 2;
  string principal_id = 3;
  string action = 4;
  string resource = 5;
  string scope = 6;
  map<string, string> context = 7;
}
message AuthResponse {
  api.authz.types.Effect effect = 1;
  string message = 2;
}
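
As a rough illustration, the Authorize API can be called directly over gRPC using stubs generated from the proto above. This is only a sketch: the generated package import path ("authzpb"), the server address, and the identifiers are placeholders, not values from the PlexAuthZ repository:

package main

import (
	"context"
	"log"

	authzpb "example.com/placeholder/authz" // hypothetical generated package
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("localhost:9090",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := authzpb.NewAuthZServiceClient(conn)
	res, err := client.Authorize(context.Background(), &authzpb.AuthRequest{
		OrganizationId: "org-123",       // placeholder id
		Namespace:      "marketing",
		PrincipalId:    "principal-456", // placeholder id
		Action:         "read",
		Resource:       "ios-app",
		Context:        map[string]string{"IPAddress": "211.211.211.5"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Effect is PERMITTED or DENIED per the enum defined earlier.
	log.Printf("effect=%v message=%s", res.Effect, res.Message)
}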

Check Constraints API

The Check API allows evaluating dynamic conditions based on GO Templates without defining Permissions, so that you can check for membership in a group or a role, the existence of a relationship, or other dynamic properties.

message CheckConstraintsRequest {
  string organization_id = 1;
  string namespace = 2;
  string principal_id = 3;
  string constraints = 4;
  map<string, string> context = 5;
}
message CheckConstraintsResponse {
  bool matched = 1;
  string output = 2;
}

Allocate and Deallocate Resources APIs

The Allocate and Deallocate APIs can be used to manage resources that can be assigned based on a quota or a maximum capacity, e.g.:

message AllocateResourceRequest {
  string organization_id = 1;
  string namespace = 2;
  string resource_id = 3;
  string principal_id = 4;
  string constraints = 5;
  google.protobuf.Duration expiry = 6;
  map<string, string> context = 7;
}

Implementation

The above hybrid authorization APIs are implemented in GO and are freely available from https://github.com/bhatti/PlexAuthZ. The following diagram illustrates the structure of the modules that implement the various parts of the Authorization system:

The following are the major components in the above diagram:

API Layer

The API layer defines service interfaces and the schema for the domain model as well as request/response objects. The interfaces are then implemented by gRPC servers and REST controllers.

Data Layer and Repositories

The Data layer defines interfaces for storing data in Redis or DynamoDB databases. The Repository layer defines interfaces for managing data for each type such as Principal, Organization and Resource.

Domain Services

The Domain services abstract over the Repository layer and implement referential integrity between data objects as well as validation logic before persisting authorization data.

Authorizer

The Authorizer layer defines interfaces for authorization decisions. The API layer implements this interface, based on Casbin, for communication between clients and servers. This layer also provides a default implementation based on the Domain service layer that enforces authorization decisions using the above APIs.

Factory and Configuration

PlexAuthZ makes extensive use of interfaces with different implementations for the Datastore, Repositories, Authorizer, and AuthAdapter. The user can choose among implementations via the Configuration, which is passed to the factory methods when instantiating objects that implement those interfaces.

AuthAdapter

The AuthAdapter abstracts the Data services and the Authorizer for interacting with the underlying authorization system. AuthAdapter defines a simplified DSL in the GO language that understands the relationships between data objects. Users can instantiate an AuthAdapter that connects to a remote gRPC server, a REST controller, or the database directly.

Usage Examples

In the above data model and APIs, Principals, Resources, and Relationships can have arbitrary attributes that are checked at runtime to enforce attribute-based policies. In addition, the request objects for Authorize, Check, and AllocateResource define runtime context properties that can be passed along with other attributes when evaluating runtime conditions based on GO Templates. The following section describes use-cases for enforcing access policies based on ABAC, RBAC, ReBAC, and PBAC:

GO Client Initialization

First, the GO client library is set up by selecting an implementation backed by the database, a gRPC client, or a REST API client, e.g.,

cfg, err := domain.NewConfig("") // omitting error handling here
// config defines mode for access by database, gRPC or REST APIs.
authService, _, err := factory.CreateAuthAdminService(cfg)
authorizer, err := authz.CreateAuthorizer(authz.DefaultAuthorizerKind, cfg, authService)

authAdapter := client.New(authorizer, authService)
orgAdapter, err := authAdapter.CreateOrganization(
		&types.Organization{
			Name:       "xyz-corp",
			Namespaces: []string{"marketing", "sales"},
		})
namespace := orgAdapter.Organization.Namespaces[0]

Attributes based Access Policies

The following example illustrates attribute-based access policies where three Principals (alice, bob, charlie) define attributes for Department and Rank:

alice, err := orgAdapter.Principals().WithUsername("alice").
		WithAttributes("Department", "Engineering", "Rank", "5").Create()
bob, err := orgAdapter.Principals().WithUsername("bob").
		WithAttributes("Department", "Engineering", "Rank", "6").Create()
charlie, err := orgAdapter.Principals().WithUsername("charlie").
		WithAttributes("Department", "Sales", "Rank", "6").Create()

Then a resource for an ios-app and its permissions are defined as follows:

app, err := orgAdapter.Resources(namespace).WithName("ios-app").
		WithAttributes("Editors", "alice bob").
		WithActions("list", "read", "write", "create", "delete").Create()

rlPerm1, err := orgAdapter.Permissions(namespace).WithResource(app.Resource).
		WithConstraints(`
	{{or (Includes .Resource.Editors .Principal.Username) (GE .Principal.Rank 6)}}
`).WithActions("read", "list").Create()

wPerm2, err := orgAdapter.Permissions(namespace).WithResource(app.Resource).
		WithConstraints(`
	{{and (Includes .Resource.Editors .Principal.Username) (GE .Principal.Rank 6)}}
`).WithActions("write").Create()

// assigning permission to all principals
alice.AddPermissions(rlPerm1.Permission, wPerm2.Permission)
bob.AddPermissions(rlPerm1.Permission, wPerm2.Permission)
charlie.AddPermissions(rlPerm1.Permission, wPerm2.Permission)

The attribute-based access permissions are checked as follows:

// Alice, Bob, Charlie should be able to read/list since alice/bob belong to 
// Editors attribute and Charlie's rank >= 6
require.NoError(t, alice.Authorizer(namespace).WithAction("list").
                WithResourceName("ios-app").Check())
require.NoError(t, bob.Authorizer(namespace).WithAction("list").
		WithResourceName("ios-app").Check())
require.NoError(t, charlie.Authorizer(namespace).WithAction("list").
		WithResourceName("ios-app").Check())

// Only Bob should be able to write because Alice's rank is lower than 6 and 
// Charlie doesn't belong to the Editors attribute.
require.Error(t, alice.Authorizer(namespace).WithAction("write").
		WithResourceName("ios-app").Check())
require.NoError(t, bob.Authorizer(namespace).WithAction("write").
		WithResourceName("ios-app").Check())
require.Error(t, charlie.Authorizer(namespace).WithAction("write").
		WithResourceName("ios-app").Check())

Note: The Authorization adapter defines a Check method that invokes either the Authorize or the Check method of the data-plane Authorization API depending on the parameters.

Runtime Attributes based on IPAddresses

GO Templates allow defining custom functions, and the PlexAuthZ implementation includes a number of helper functions to validate IP addresses, geolocation, time, and other environmental factors, e.g.,

rwlPerm, err := orgAdapter.Permissions(namespace).
		WithResource(app.Resource).
		WithConstraints(`
{{$Loopback := IsLoopback .IPAddress}}
{{$Multicast := IsMulticast .IPAddress}}
{{and (not $Loopback) (not $Multicast) (IPInRange .IPAddress "211.211.211.0/24")}}
`).WithActions("read", "write", "list").Create()
alice.AddPermissions(rwlPerm.Permission)

// The app should only be accessible if the ip-address is not loopback,
// not multicast, and within the allowed ip-range
require.NoError(t, alice.Authorizer(namespace).WithAction("list").
		WithContext("IPAddress", "211.211.211.5").WithResourceName("ios-app").Check())
// But not local ipaddress or multicast
require.Error(t, alice.Authorizer(namespace).WithAction("list").
		WithContext("IPAddress", "127.0.0.1").WithResourceName("ios-app").Check())
require.Error(t, alice.Authorizer(namespace).WithAction("list").
		WithContext("IPAddress", "224.0.0.1").WithResourceName("ios-app").Check())

RBAC Scenario

The following example assigns roles and groups to Principal objects and then enforces membership before granting access:

teller, err := orgAdapter.Roles(namespace).WithName("Teller").Create()
manager, err := orgAdapter.Roles(namespace).WithName("Manager").WithParents(teller.Role).Create()
loanOfficer, err := orgAdapter.Roles(namespace).WithName("LoanOfficer").Create()
support, err := orgAdapter.Roles(namespace).WithName("ITSupport").Create()

// Assigning roles
alice.AddRoles(manager.Role)
bob.AddRoles(loanOfficer.Role)
charlie.AddRoles(support.Role)

sales, err := orgAdapter.Groups(namespace).WithName("Sales").Create()
accounting, err := orgAdapter.Groups(namespace).WithName("Accounting").Create()
engineering, err := orgAdapter.Groups(namespace).WithName("Engineering").Create()

// Assigning groups
alice.AddGroups(sales.Group)
bob.AddGroups(accounting.Group)
charlie.AddGroups(engineering.Group)

The following snippet illustrates enforcement of role and group membership:

require.NoError(t, alice.Authorizer(namespace).WithConstraints(
`{{and (HasRole "Teller") (HasGroup "Sales") (TimeInRange .CurrentTime .StartTime .EndTime)}}`).
WithContext("CurrentTime", "10:00am", "StartTime", "8:00am", "EndTime", "4:00pm").Check())

require.NoError(t, bob.Authorizer(namespace).WithConstraints(
`{{and (HasRole "LoanOfficer") (HasGroup "Accounting") (TimeInRange .CurrentTime .StartTime .EndTime) (GT .Principal.EmploymentLength 1)}}`).
WithContext("CurrentTime", "10:00am", "StartTime", "8:00am", "EndTime", "4:00pm").Check())

require.NoError(t, charlie.Authorizer(namespace).WithConstraints(`
{{and (HasRole "ITSupport") (HasGroup "Engineering") (TimeInRange .CurrentTime .StartTime .EndTime) (GT .Principal.EmploymentLength 1)}}`).
WithContext("CurrentTime", "10:00am", "StartTime", "8:00am", "EndTime", "4:00pm").Check())

// but should fail for bob because the ITSupport Role and Engineering Group are required.
require.Error(t, bob.Authorizer(namespace).
WithConstraints(
`{{and (HasRole "ITSupport") (HasGroup "Engineering") (TimeInRange .CurrentTime .StartTime .EndTime) (GT .Principal.EmploymentLength 1)}}`).
WithContext("CurrentTime", "10:00am", "StartTime", "8:00am", "EndTime", "4:00pm").Check())

Note: The Authorizer adapter invokes the Check API in the above use-cases because they only use constraints without defining permissions.

ReBAC Scenario

Though ReBAC systems generally define relationships between actors, you can consider a Principal as a subject-actor and a Resource as a target-actor for relationships. The following scenario illustrates how relationships between Principals and Resources can be used to enforce ReBAC-based access policies similar to Zanzibar:

smith, err := orgAdapter.Principals().WithUsername("smith").
		WithAttributes("UserRole", "Doctor").Create()
john, err := orgAdapter.Principals().WithUsername("john").
		WithAttributes("UserRole", "Patient").Create()

medicalRecords, err := orgAdapter.Resources(namespace).WithName("MedicalRecords").
		WithAttributes("Year", fmt.Sprintf("%d", time.Now().Year()), "Location", "Hospital").
		WithActions("read", "write", "create", "delete").Create()

docRelation, err := smith.Relationships(namespace).WithRelation("AsDoctor").
		WithResource(medicalRecords.Resource).
		WithAttributes("Location", "Hospital").Create()

patientRelation, err := john.Relationships(namespace).WithRelation("AsPatient").
		WithResource(medicalRecords.Resource).Create()

rwPerm, err := orgAdapter.Permissions(namespace).WithResource(medicalRecords.Resource).
		WithConstraints(`
{{$CurrentYear := TimeNow "2006"}}
{{and (HasRelation "AsDoctor") (DistanceWithinKM .UserLatLng "46.879967,-121.726906" 100) 
(eq .Resource.Year $CurrentYear) (eq .Resource.Location .Location)}}
`).WithActions("read", "write").Create()

rPerm, err := orgAdapter.Permissions(namespace).WithResource(medicalRecords.Resource).
		WithScope("john's records").
		WithConstraints(`
{{$CurrentYear := TimeNow "2006"}}
{{and (HasRelation "AsPatient") (eq .Resource.Year $CurrentYear) (eq .Resource.Location .Location)}}
`).WithActions("read").Create()

smith.AddPermissions(rwPerm.Permission)
john.AddPermissions(rPerm.Permission)

The above snippet defines medical-records as a resource and Principals for smith and john, where smith is assigned an AsDoctor relationship and john an AsPatient relationship. The permission for reading or writing medical records enforces the AsDoctor relationship, and the permission for reading medical records enforces the AsPatient relationship. The relationships are then enforced as follows:

// Dr. Smith should have permission for reading/writing medical records based on constraints
require.NoError(t, smith.Authorizer(namespace).WithAction("write").
		WithResource(medicalRecords.Resource).
		WithContext("UserLatLng", "47.620422,-122.349358", "Location", "Hospital").Check())

// Patient john should have permission for reading medical records based on constraints
require.NoError(t, john.Authorizer(namespace).WithAction("read").
		WithScope("john's records").WithResource(medicalRecords.Resource).
		WithContext("Location", "Hospital").Check())

// But Patient john should not write medical records
require.Error(t, john.Authorizer(namespace).WithAction("write").
		WithResource(medicalRecords.Resource).
		WithContext("Location", "Hospital").Check())

The above snippet also makes use of other functions available in the template language to enforce dynamic conditions based on geofencing, which permits access only when the doctor is close to the Hospital.

As the Relationships are defined between actors, we can also define a Resource to represent a Doctor and a Principal for the patient so that a patient-doctor relationship can be established, e.g.,

// Now treating Doctor as Target Resource for appointment
doctorResource, err := orgAdapter.Resources(namespace).WithName(smith.Principal.Name).
		WithAttributes("Year", fmt.Sprintf("%d", time.Now().Year()),
			"Location", "Hospital").WithActions("appointment", "consult").Create()

doctorPatientRelation, err := john.Relationships(namespace).WithRelation("Physician").
		WithAttributes("StartTime", "8:00am", "EndTime", "4:00pm").
		WithResource(doctorResource.Resource).Create()

apptPerm, err := orgAdapter.Permissions(namespace).WithResource(doctorResource.Resource).
		WithConstraints(`
{{$CurrentYear := TimeNow "2006"}}
{{and (TimeInRange .AppointmentTime .Relations.Physician.StartTime .Relations.Physician.EndTime) 
(HasRelation "Physician") (eq "Patient" .Principal.UserRole) (eq .Resource.Year $CurrentYear) (eq .Resource.Location .Location)}}
`).WithActions("appointment").Create()
john.AddPermissions(apptPerm.Permission)

// Patient john should be able to make an appointment within normal Hospital hours
require.NoError(t, john.Authorizer(namespace).WithAction("appointment").
		WithResource(doctorResource.Resource).
		WithContext("Location", "Hospital", "AppointmentTime", "10:00am").Check())

The above example shows how authorization rules can also limit access to the doctor's normal appointment hours.

Resources with Quota

PlexAuthZ supports defining access policies for resources that have a quota, e.g., an organization may have a fixed set of IDE licenses to be used by the engineering team or may be using utility-based computing resources with a daily budget. Here is an example scenario:

engGroup, err := orgAdapter.Groups(namespace).WithName("Engineering").Create()

alice, err := orgAdapter.Principals().WithUsername("alice").
		WithAttributes("Title", "Engineer", "Tenure", "3").Create()

// Assigning groups
alice.AddGroups(engGroup.Group)

// AND with following resources
ideLicences, err := orgAdapter.Resources(namespace).WithName("IDELicence").
		WithCapacity(5).WithAttributes("Location", "Chicago").
		WithActions("use").Create()

require.NoError(t, ideLicences.
	WithConstraints(`and (GT .Principal.Tenure 1) (HasGroup "Engineering") (eq .Resource.Location .Location)`).
	WithExpiration(time.Hour).WithContext("Location", "Chicago").Allocate(alice.Principal))
...
// Deallocate after use
require.NoError(t, ideLicences.Deallocate(alice.Principal))

The above example demonstrates that an IDE license can only be allocated if the Principal is a member of the Engineering group, has a tenure of more than a year, and the request Location matches the Resource Location. In addition, the resource is allocated only for a fixed duration and is automatically deallocated if not deallocated explicitly. Both Redis and DynamoDB support TTL parameters for expiring data, so no application logic is required to expire the allocations.

Resources with Wildcard in the name

PlexAuthZ supports resources with wildcards in the name so that permissions apply to all resources matching the wildcard pattern. Here is an example:

alice, err := orgAdapter.Principals().WithUsername("alice").
		WithAttributes("Department", "Sales", "Rank", "6").Create()
bob, err := orgAdapter.Principals().WithUsername("bob").
		WithAttributes("Department", "Engineering", "Rank", "6").Create()

// Creating a project with wildcard
salesProject, err := orgAdapter.Resources(namespace).
		WithName("urn:org-sales-*-project-1000-*").
		WithAttributes("SalesYear", fmt.Sprintf("%d", time.Now().Year())).
		WithActions("read", "write").Create()

rwlPerm, err := orgAdapter.Permissions(namespace).
		WithResource(salesProject.Resource).
		WithEffect(types.Effect_PERMITTED).
		WithConstraints(`
	{{$CurrentYear := TimeNow "2006"}}
	{{and (GT .Principal.Rank 5) (eq .Principal.Department "Sales") (IPInRange .IPAddress "211.211.211.0/24") (eq .Resource.SalesYear $CurrentYear)}}
`).WithActions("*").Create()
	require.NoError(t, err)

alice.AddPermissions(rwlPerm.Permission)
bob.AddPermissions(rwlPerm.Permission)

// Alice should be able to access from Sales Department and complete project name
require.NoError(t, alice.Authorizer(namespace).WithAction("read").
		WithResourceName("urn:org-sales-abc-project-1000-xyz").
		WithContext("IPAddress", "211.211.211.5").Check())
// But bob should not be able to access project because he doesn't belong to the Sales Department
require.Error(t, bob.Authorizer(namespace).WithAction("read").
		WithResourceName("urn:org-sales-abc-project-1000-xyz").
		WithContext("IPAddress", "211.211.211.5").Check())

Note: The project name “urn:org-sales-abc-project-1000-xyz” matches the wildcard in the resource name, and the permissions also verify attributes of the Resource and the Principal.

Permissions with Scope

PlexAuthZ allows associating permissions with a specific Scope, and the permission is only granted if the scope in the authorization request at runtime matches it, e.g.,

alice, err := orgAdapter.Principals().WithUsername("alice").
		WithAttributes("Department", "Engineering", "Permanent", "true").Create()
bob, err := orgAdapter.Principals().WithUsername("bob").
		WithAttributes("Department", "Sales", "Permanent", "true").Create()

project, err := orgAdapter.Resources(namespace).WithName("nextgen-app").
		WithAttributes("Owner", "alice").
		WithActions("list", "read", "write", "create", "delete").Create()

rwlPerm, err := orgAdapter.Permissions(namespace).WithResource(project.Resource).
		WithScope("Reporting").
		WithConstraints(`
	{{or (eq .Principal.Username .Resource.Owner) (Not .Private)}}
`).WithActions("read", "write", "list").Create()

alice.AddPermissions(rwlPerm.Permission)
bob.AddPermissions(rwlPerm.Permission)

// The project should only be accessible by alice as the scope matches and she is the owner.
require.NoError(t, alice.Authorizer(namespace).
		WithAction("list").WithScope("Reporting").
		WithContext("Private", "true").
		WithResourceName("nextgen-app").Check())
// But alice should not be able to access without matching scope.
require.Error(t, alice.Authorizer(namespace).
		WithAction("list").WithScope("").
		WithContext("Private", "true").
		WithResourceName("nextgen-app").Check())

// But bob should not be able to access as project is private and he is not the owner.
require.Error(t, bob.Authorizer(namespace).
		WithAction("list").WithScope("Reporting").
		WithContext("Private", "true").
		WithResourceName("nextgen-app").Check())

Note: The above example also demonstrates how you can enforce ownership for private resources.

Summary

PlexAuthZ demonstrates how a hybrid authorization system can support various forms of access policies based on ABAC, RBAC, ReBAC, and PBAC. It’s still early in development, but it’s an open-source project that you can try freely from https://github.com/bhatti/PlexAuthZ.

August 28, 2023

Mitigate Production Risks with Phased Deployment

Filed under: Computing,Microservices — admin @ 6:08 pm

Phased deployment is a software deployment strategy where new software features, changes, or updates are gradually released to a subset of a product’s user base rather than to the entire user community at once. The goal is to limit the impact of any potentially negative changes and to catch issues before they affect all users. It’s often a part of modern Agile and DevOps practices, allowing teams to validate software in stages—through testing environments, to specific user segments, and finally, to the entire user base. Phased deployment addresses the following issues with production changes:

  1. Risk Mitigation: Deploying changes all at once can be risky, especially for large and complex systems. Phased deployment helps mitigate this risk by gradually releasing the changes and carefully monitoring their impact.
  2. User Experience: With phased deployment, if something goes wrong, it affects only a subset of users. This protects the larger user base from potential issues and negative experiences.
  3. Performance Bottlenecks: By deploying in phases, you can monitor how the system performs under different loads, helping to identify bottlenecks and scaling issues before they impact all users.
  4. Immediate Feedback: Quick feedback loops with stakeholders and users are established. This immediate feedback helps in quick iterations and refinements.
  5. Resource Utilization: Phased deployment allows for better planning and use of resources. You can allocate just the resources you need for each phase, reducing waste.

Phased deployment applies the following approaches for detecting production issues early in the deployment process:

  1. Incremental Validation: As each phase is a limited rollout, you can carefully monitor and validate that the software is working as expected. This enables early detection of issues before they become widespread.
  2. Isolation of Issues: If an issue does arise, its impact is restricted to a smaller subset of the system or user base. This makes it easier to isolate the problem, fix it, and then proceed with the deployment.
  3. Rollbacks: In the event of a problem, it’s often easier to rollback changes for a subset of users than for an entire user base. This allows for quick recovery with minimal impact.
  4. Data-driven Decisions: The metrics and logs gathered during each phase can be invaluable for making informed decisions, reducing the guesswork, and thereby reducing errors.
  5. User Feedback: By deploying to a limited user set first, you can collect user feedback that can be crucial for understanding how the changes are affecting user interaction and performance. This provides another opportunity for catching issues before full-scale deployment.
  6. Best Practices and Automation: Phased deployment often incorporates industry best practices like blue/green deployments, canary releases, and feature flags, all of which help in minimizing errors and ensuring a smooth release.

Building CI/CD Process for Phased Deployment

CI/CD with Phased Deployment

Continuous Integration (CI)

Continuous Integration (CI) is a software engineering practice aimed at regularly merging all developers’ working copies of code to a shared mainline or repository, usually multiple times a day. The objective is to catch integration errors as quickly as possible and ensure that code changes by one developer are compatible with code changes made by other developers in the team. The practice defines the following steps for integrating developers’ changes:

  1. Code Commit: Developers write code in their local environment, ensuring it meets all coding guidelines and includes necessary unit tests.
  2. Pull Request / Merge Request: When a developer believes their code is ready to be merged, they create a pull request or merge request. This action usually triggers the CI process.
  3. Automated Build and Test: The CI server automatically picks up the new code changes that may be in a feature branch and initiates a build and runs all configured tests.
  4. Code Review: Developers and possibly other stakeholders review the test and build reports. If errors are found, the code is sent back for modification.
  5. Merge: If everything looks good, the changes are merged into main branch of the repository.
  6. Automated Build: After every commit, automated build processes compile the source code, create executables, and run unit/integration/functional tests.
  7. Automated Testing: This stage automatically runs a suite of tests that can include unit tests, integration tests, test coverage and more.
  8. Reporting: Generate and publish reports detailing the success or failure of the build, lint/FindBugs, static analysis (Fortify), dependency analysis, and tests.
  9. Notification: Developers are notified about the build and test status, usually via email, Slack, or through the CI system’s dashboard.
  10. Artifact Repository: Store the build artifacts that pass all the tests for future use.

The above continuous integration process allows immediate feedback on code changes, reduces integration risk, increases confidence, encourages better collaboration, and improves code quality.

Continuous Deployment (CD)

Continuous Deployment (CD) further enhances this by automating the delivery of applications to selected infrastructure environments. Where CI deals with building, testing, and merging code, CD takes the code from CI and deploys it directly into the production environment, making changes that pass all automated tests immediately available to users. The above workflow for Continuous Integration is extended with the following additional steps:

  1. Code Committed: Once code passes all tests during the CI phase, it moves onto CD.
  2. Pre-Deployment Staging: Code may be deployed to a staging area where it undergoes additional tests that could be too time-consuming or risky to run during CI. The staging environment can be divided into multiple environments such as alpha staging for integration and sanity testing, beta staging for functional and acceptance testing, and gamma staging environment for chaos, security and performance testing.
  3. Performance Bottlenecks: The staging environment may execute security, chaos, shadow and performance tests to identify bottlenecks and scaling issues before deploying code to the production.
  4. Deployment to Production: If the code passes all checks, it’s automatically deployed to production.
  5. Monitoring & Verification: After deployment, automated systems monitor application health and performance. Some systems use Canary Testing to continuously verify that deployed features are behaving as expected.
  6. Rollback if Necessary: If an issue is detected, the CD system can automatically rollback to a previous, stable version of the application.
  7. Feedback Loop: Metrics and logs from the deployed application can be used to inform future development cycles.

The Continuous Deployment process results in faster time-to-market, reduced risk, greater reliability, improved quality, and better efficiency and resource utilization.

Phased Deployment Workflow

Phased Deployment allows rolling out a change in increments rather than deploying it to all servers or users at once. This strategy fits naturally into a Continuous Integration/Continuous Deployment (CI/CD) pipeline and can significantly reduce the risks associated with releasing new software versions. The CI/CD workflow is enhanced as follows:

  1. Code Commit & CI Process: Developers commit code changes, which trigger the CI pipeline for building and initial testing.
  2. Initial Deployment to Dev Environment: After passing CI, the changes are deployed to a development environment for further testing.
  3. Automated Tests and Manual QA: More comprehensive tests are run. This could also include security, chaos, shadow, load and performance tests.
  4. Phase 1 Deployment (Canary Release): Deploy the changes to a small subset of the production environment or users and monitor closely. If you operate in multiple data centers, cellular architecture or geographical regions, consider initiating your deployment in the area with the fewest users to minimize the impact of potential issues. This approach helps in reducing the “blast radius” of any potential problems that may arise during deployment.
  5. PreProd Testing: In the initial phase, you may optionally first deploy to a special pre-prod environment where you only execute canary testing that simulates user requests with synthetic transactions, without real user traffic reaching the production infrastructure, so that you can further reduce the blast radius of any impact on customer experience.
  6. Baking Period: To make informed decisions about the efficacy and reliability of your code changes, it’s crucial to have a ‘baking period’ where the new code is monitored and tested. During this time, you’ll gather essential metrics and data that help in confidently determining whether or not to proceed with broader deployments.
  7. Monitoring and Metrics Collection: Use real-time monitoring tools to track system performance, error rates, and other KPIs.
  8. Review and Approval: If everything looks good, approve the changes for the next phase. If issues are found, roll back and diagnose.
  9. Subsequent Phases: Roll out the changes to larger subsets of the production environment or user base, monitoring closely at each phase. The subsequent phases may use a simple static scheme by adding X servers or user-segments at a time or geometric scheme by exponentially doubling the number of servers or user-segments after each phase. For instance, you can employ mathematical formulas like 2^N or 1.5^N, where N represents the phase number, to calculate the scope of the next deployment phase. This could pertain to the number of servers, geographic regions, or user segments that will be included.
  10. Subsequent Baking Periods: As confidence in the code increases through successful earlier phases, the duration of subsequent ‘baking periods’ can be progressively shortened. This allows for an acceleration of the phased deployment process until the changes are rolled out to all regions or user segments.
  11. Final Rollout: After all phases are successfully completed, deploy the changes to all servers and users.
  12. Continuous Monitoring: Even after full deployment, keep running Canary Tests for validation and monitoring to ensure everything is working as expected.

Thus, phase deployment further mitigates risk, improves user experience, monitoring and resource utilization. If a problem is identified, it’s much easier to roll back changes for a subset of users, reducing negative impact.

Criteria for Selecting Targets for Phased Deployment

When choosing targets for phased deployment, you have multiple options, including cells within a Cellular Architecture, distinct Geographical Regions, individual Servers within a data center, or specific User Segments. Here are some key factors to consider while making your selection:

  1. Risk Assessment: The first step in selecting cells, regions, or user-segments is to conduct a thorough risk assessment. The idea is to understand which areas are most sensitive to changes and which are relatively insulated from potential issues.
  2. User Activity: Regions with lower user activity can be ideal candidates for the initial phases, thereby minimizing the impact if something goes wrong.
  3. Technical Constraints: Factors such as server capacity, load balancing, and network latency may also influence the selection process.
  4. Business Importance: Some user-segments or regions may be more business-critical than others. Starting deployment in less critical areas can serve as a safe first step.
  5. Gradual Scale-up: Mathematical formulas like 2^N or 1.5^N where N is the phase number can be used to gradually increase the size of the deployment target in subsequent phases.
  6. Performance Metrics: Utilize performance metrics like latency, error rates, etc., to decide on next steps after each phase.

Always start with the least risky cells, regions, or user-segments in the initial phases and then use metrics and KPIs to gain confidence in the deployed changes. After gaining confidence from the initial phases, you may initiate parallel deployments across multiple environments, perhaps even in multiple regions simultaneously. However, you should ensure that each environment has its own independent monitoring to quickly identify and isolate issues. The rollback strategy should be tested ahead of time to ensure it works as expected before parallel deployment. You should keep detailed logs and documentation for each deployment phase and environment.
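
To make the geometric scale-up concrete, the following minimal Go sketch turns a growth factor such as 1.5^N into a per-phase schedule; the fleet size and growth factor are illustrative assumptions, not recommendations:

package main

import (
	"fmt"
	"math"
)

// phaseTargets returns a geometric deployment schedule: phase N covers
// roughly growth^N hosts (or user-segments), capped at the total fleet size.
func phaseTargets(total int, growth float64) []int {
	var targets []int
	for n := 1; ; n++ {
		size := int(math.Ceil(math.Pow(growth, float64(n))))
		if size >= total {
			targets = append(targets, total)
			return targets
		}
		targets = append(targets, size)
	}
}

func main() {
	// A 100-host fleet with 1.5x growth yields:
	// [2 3 4 6 8 12 18 26 39 58 87 100]
	fmt.Println(phaseTargets(100, 1.5))
}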

Cellular Architecture

Phased deployment can work particularly well with a cellular architecture, offering a systematic approach to gradually release new code changes while ensuring system reliability. In cellular architecture, your system is divided into isolated cells, each capable of operating independently. These cells could represent different services, geographic regions, user segments, or even individual instances of a microservices-based application. For example, you can identify which cells will be the first candidates for deployment, typically those with the least user traffic or those deemed least critical.

The deployment process begins by introducing the new code to an initial cell or a small cluster of cells. This initial rollout serves as a pilot phase, during which key performance indicators such as latency, error rates, and other metrics are closely monitored. If the data gathered during this ‘baking period’ indicates issues, a rollback is triggered. If all goes well, the deployment moves on to the next set of cells. Subsequent phases follow the same procedure, gradually extending the deployment to more cells. Utilizing phased deployment within a cellular architecture helps to minimize the impact area of any potential issues, thus facilitating more effective monitoring, troubleshooting, and ultimately a more reliable software release.

Blue/Green Deployment

Phased deployment can employ the Blue/Green deployment strategy, where two separate environments, often referred to as “blue” and “green,” are configured. Both are identical in terms of hardware, software, and settings. The Blue environment runs the current version of the application and serves all user traffic. The Green environment is a clone of the Blue environment where the new version of the application is deployed. This helps phased deployment because one environment is always live, thus allowing new features to be released without downtime. If issues are detected, traffic can be quickly rerouted back to the Blue environment, minimizing the risk and impact of new deployments. The Blue/Green deployment includes the following steps:

  1. Preparation: Initially, both Blue and Green environments run the current version of the application.
  2. Initial Rollout: Deploy the new application code or changes to the Green environment.
  3. Verification: Perform tests on the Green environment to make sure the new code changes are stable and performant.
  4. Partial Traffic Routing: In a phased manner, start rerouting a small portion of the live traffic to the Green environment. Monitor key performance indicators like latency, error rates, etc.
  5. Monitoring and Decision: If any issues are detected during this phase, roll back the traffic to the Blue environment without affecting the entire user base. If metrics are healthy, proceed to the next phase.
  6. Progressive Routing: Gradually increase the percentage of user traffic being served by the Green environment, closely monitoring metrics at each stage.
  7. Final Cutover: Once confident that the Green environment is stable, you can reroute 100% of the traffic from the Blue to the Green environment.
  8. Fallback: Keep the Blue environment operational for a period as a rollback option in case any issues are discovered post-switch.
  9. Decommission or Sync: Eventually, decommission the Blue environment or synchronize it to the Green environment’s state for future deployments.
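
The partial and progressive traffic routing steps above amount to a weighted choice between the two environments. A minimal sketch of that idea follows; in practice this is normally done at the load balancer or service mesh, and the percentages and backend names here are illustrative assumptions:

package main

import (
	"fmt"
	"math/rand"
)

// chooseBackend routes a request to the Green environment with the given
// probability (in percent) and to the Blue environment otherwise.
func chooseBackend(greenPercent int) string {
	if rand.Intn(100) < greenPercent {
		return "https://green.internal.example.com" // placeholder backend
	}
	return "https://blue.internal.example.com" // placeholder backend
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[chooseBackend(10)]++ // phase 1: roughly 10% of traffic to Green
	}
	fmt.Println(counts)
}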

Automated Testing

The CI/CD and phased deployment strategy relies on automated testing to validate changes to a subset of infrastructure or users. The automated testing includes a variety of testing types that should be performed at different stages of the process, such as:

  • Functional Testing: testing the new feature or change before initiating phased deployment to make sure it performs its intended function correctly.
  • Security Testing: testing for vulnerabilities, threats, or risks in a software application before phased deployment.
  • Performance Testing: testing how the system performs under heavy loads or large amounts of data before and during phased deployment.
  • Canary Testing: involves rolling out the feature to a small, controlled group before making it broadly available. This also includes testing via synthetic transactions that simulate user requests. This is executed early in the phased deployment process; however, testing via synthetic transactions is continuously performed in the background.
  • Shadow Testing: In this method, the new code runs alongside the existing system, processing real data requests without affecting the actual system.
  • Chaos Testing: This involves intentionally introducing failures to see how the system reacts. It is usually run after other types of testing have been performed successfully, but before full deployment.
  • Load Testing: test the system under the type of loads it will encounter in the real world before the phased deployment.
  • Stress Testing: attempt to break the system by overwhelming its resources. It is executed late in the phased deployment process, but before full deployment.
  • Penetration Testing: security testing where testers try to ‘hack’ into the system.
  • Usability Testing: testing from the user’s perspective to make sure the application is easy to use in early stages of phased deployment.
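
For the canary testing via synthetic transactions mentioned above, a minimal probe can be sketched as follows; the endpoint, latency budget, and probing interval are illustrative placeholders:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues one synthetic transaction against the newly deployed stack and
// reports whether it met the status and latency expectations.
func probe(url string, maxLatency time.Duration) error {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	if elapsed := time.Since(start); elapsed > maxLatency {
		return fmt.Errorf("latency %v exceeded budget %v", elapsed, maxLatency)
	}
	return nil
}

func main() {
	// Run the canary probe continuously in the background during each phase;
	// a failure would typically page the on-call or trigger a rollback.
	for {
		if err := probe("https://canary.example.com/health", 500*time.Millisecond); err != nil {
			fmt.Println("canary failure:", err)
		}
		time.Sleep(30 * time.Second)
	}
}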

Monitoring

Monitoring plays a pivotal role in validating the success of phased deployments, enabling teams to ensure that new features and updates are not just functional, but also reliable, secure, and efficient. By constantly collecting and analyzing metrics, monitoring offers real-time feedback that can inform deployment decisions. Here’s how monitoring can help with the validation of phased deployments:

  • Real-Time Metrics and Feedback: collecting real-time metrics on system performance, user engagement, and error rates.
  • Baking Period Analysis: using a “baking” period where the new code is run but closely monitored for any anomalies.
  • Anomaly Detection: using automated monitoring tools to flag anomalies in real-time, such as a spike in error rates or a drop in user engagement.
  • Benchmarking: establishing performance benchmarks based on historical data.
  • Compliance and Security Monitoring: monitoring for unauthorized data access or other security-related incidents.
  • Log Analysis: using aggregated logs to show granular details about system behavior.
  • User Experience Monitoring: tracking metrics related to user interactions, such as page load times or click-through rates.
  • Load Distribution: monitoring how well the new code handles different volumes of load, especially during peak usage times.
  • Rollback Metrics: tracking metrics related to rollback procedures.
  • Feedback Loops: using monitoring data for continuous feedback into the development cycle.
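
As a simplified illustration of baking-period analysis and anomaly detection, the sketch below compares live metrics against a historical baseline and flags a regression. The baseline values and thresholds are hypothetical; a real setup would pull these metrics from a monitoring system such as Prometheus or CloudWatch.

// Minimal sketch of baking-period anomaly detection against a historical baseline.
// Baseline values and thresholds are hypothetical placeholders.
public class BakingPeriodCheck {

    static final class Metrics {
        final double errorRate;     // fraction of failed requests, e.g. 0.002 = 0.2%
        final double p99LatencyMs;  // 99th-percentile latency in milliseconds

        Metrics(double errorRate, double p99LatencyMs) {
            this.errorRate = errorRate;
            this.p99LatencyMs = p99LatencyMs;
        }
    }

    // Flag an anomaly if the new deployment regresses beyond an allowed margin.
    static boolean isAnomalous(Metrics baseline, Metrics current) {
        boolean errorSpike = current.errorRate > baseline.errorRate * 2.0 + 0.001;
        boolean latencyRegression = current.p99LatencyMs > baseline.p99LatencyMs * 1.25;
        return errorSpike || latencyRegression;
    }

    public static void main(String[] args) {
        Metrics baseline = new Metrics(0.002, 180);  // from historical data
        Metrics current = new Metrics(0.009, 170);   // sampled from the current phase

        if (isAnomalous(baseline, current)) {
            System.out.println("Anomaly detected: halt the rollout and consider rollback");
        } else {
            System.out.println("Metrics healthy: proceed to the next phase");
        }
    }
}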

Feature Flags

Feature flags, also known as feature toggles, are a powerful tool in the context of phased deployments. They provide developers and operations teams the ability to turn features on or off without requiring a code deployment. This capability synergizes well with phased deployments by offering even finer control over the feature release process. The benefits of feature flags include:

  • Gradual Rollout: Gradually releasing a new feature to a subset of your user base.
  • Targeted Exposure: Enable targeted exposure of features to specific user segments based on different attributes like geography, user role, etc.
  • Real-world Testing: With feature flags, you can perform canary releases, blue/green deployments, and A/B tests in a live environment without affecting the entire user base.
  • Risk Mitigation: If an issue arises during a phased deployment, a feature can be turned off immediately via its feature flag, preventing any further impact.
  • Easy Rollback: Since feature flags allow for features to be toggled on and off, rolling back a feature that turns out to be problematic is straightforward and doesn’t require a new deployment cycle.
  • Simplified Troubleshooting: Feature flags simplify the troubleshooting process since you can easily isolate problems and understand their impact.
  • CI/CD Compatibility: Feature flags are often used in conjunction with CI/CD pipelines, allowing features to be integrated into the main codebase even if they are not yet ready for public release.
  • Conditional Logic: Advanced feature flags can include conditional logic, allowing you to automate the criteria under which features are exposed to users.
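
The following is a minimal sketch of how a feature flag with gradual rollout and targeted exposure might be evaluated in application code. The flag name, rollout percentage, and region targeting are illustrative assumptions; hosted flag services such as LaunchDarkly provide this logic, persistence, and a management UI out of the box.

import java.util.Set;

// Minimal feature-flag evaluation sketch with percentage rollout and targeting.
// Flag name, rollout percentage, and allowed regions are hypothetical.
public class FeatureFlagEvaluator {
    private final String flagName;
    private final boolean enabled;            // global kill switch
    private final int rolloutPercent;         // gradual rollout, 0..100
    private final Set<String> allowedRegions; // targeted exposure

    public FeatureFlagEvaluator(String flagName, boolean enabled,
                                int rolloutPercent, Set<String> allowedRegions) {
        this.flagName = flagName;
        this.enabled = enabled;
        this.rolloutPercent = rolloutPercent;
        this.allowedRegions = allowedRegions;
    }

    // Deterministic per-user bucketing so a given user always gets the same decision.
    public boolean isEnabledFor(String userId, String region) {
        if (!enabled) {
            return false; // instant rollback: flip the kill switch, no redeploy needed
        }
        if (!allowedRegions.isEmpty() && !allowedRegions.contains(region)) {
            return false;
        }
        int bucket = Math.floorMod((flagName + ":" + userId).hashCode(), 100);
        return bucket < rolloutPercent;
    }

    public static void main(String[] args) {
        FeatureFlagEvaluator newCheckout = new FeatureFlagEvaluator(
                "new-checkout-flow", true, 10, Set.of("us-east", "eu-west"));
        System.out.println(newCheckout.isEnabledFor("user-42", "us-east"));
        System.out.println(newCheckout.isEnabledFor("user-42", "ap-south"));
    }
}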

A/B Testing

A/B testing, also known as split testing, is an experimental approach used to compare two or more versions of a web page, feature, or other variables to determine which one performs better. In the context of software deployment, A/B testing involves rolling out different variations (A, B, etc.) of a feature or application component to different subsets of users. Metrics like user engagement, conversion rates, or performance indicators are then collected to statistically validate which version is more effective or meets the desired goals better. Phased deployment and A/B testing can complement each other in a number of ways:

  • Both approaches aim to reduce risk but do so in different ways.
  • Both methodologies are user-focused but in different respects.
  • A/B tests offer a more structured way to collect user-related metrics, which can be particularly valuable during phased deployments.
  • Feature flags, often used in both A/B testing and phased deployment, give teams the ability to toggle features on or off for specific user segments or phases.
  • If an A/B test shows one version to be far more resource-intensive than another, this information could be invaluable for phased deployment planning.
  • The feedback from A/B testing can feed into the phased deployment process to make real-time adjustments.
  • A/B testing can be included as a step within a phase of a phased deployment, allowing for user experience quality checks.
  • In a more complex scenario, you could perform A/B testing within each phase of a phased deployment.
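
For example, a deterministic hash of the user identifier can split users into stable A and B cohorts within a deployment phase, so the same user always sees the same variant while metrics are collected. The experiment name and 50/50 split below are hypothetical.

// Minimal A/B assignment sketch: a stable hash splits users into variants.
// Experiment name and the 50/50 split are hypothetical.
public class AbTestAssigner {

    enum Variant { A, B }

    static Variant assign(String experimentName, String userId, int percentInB) {
        int bucket = Math.floorMod((experimentName + ":" + userId).hashCode(), 100);
        return bucket < percentInB ? Variant.B : Variant.A;
    }

    public static void main(String[] args) {
        // Within a deployment phase, record metrics (conversion, latency) per variant
        // and compare them before widening the rollout.
        for (String user : new String[]{"user-1", "user-2", "user-3"}) {
            System.out.println(user + " -> " + assign("checkout-redesign", user, 50));
        }
    }
}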

Safe Rollback

Safe rollback is a critical aspect of a robust CI/CD pipeline, especially when implementing phased deployments. Here’s how safe rollback can be implemented:

  • Maintain versioned releases of your application so that you can easily identify which version to roll back to.
  • Always make database changes backward-compatible so that rolling back the application won’t cause compatibility issues with the database.
  • Utilize feature flags so that you can disable problematic features without needing to roll back the entire deployment.
  • Implement comprehensive monitoring and logging to quickly identify issues that might necessitate a rollback.
  • Automate rollback procedures.
  • Keep the old version (Blue) running as you deploy the new version (Green). If something goes wrong, switch the load balancer back to the old version.
  • Use canary releases to roll out the new version to a subset of your infrastructure. If errors occur, halt the rollout and revert the canary servers to the old version.

The following steps should be applied on rollback:

  1. Immediate Rollback: As soon as an issue is detected that can’t be quickly fixed, trigger the rollback procedure.
  2. Switch Load Balancer: In a Blue/Green setup, switch the load balancer back to route traffic to the old version.
  3. Database Rollback: If needed and possible, roll back the database changes. Be very cautious with this step, as it can be risky.
  4. Feature Flag Disablement: If the issue is isolated to a particular feature that’s behind a feature flag, consider disabling that feature.
  5. Validation: After rollback, validate that the system is stable. This should include health checks and possibly smoke tests.
  6. Postmortem Analysis: Once the rollback is complete and the system is stable, conduct a thorough analysis to understand what went wrong.

One critical consideration to keep in mind is ensuring both backward and forward compatibility, especially when altering communication protocols or serialization formats. For instance, if you update the serialization format and the new code writes data in this new format, the old code may become incompatible and unable to read the data if a rollback is needed. To mitigate this risk, you can deploy an intermediate version that is capable of reading the new format without actually writing in it.

Here’s how it works:

  1. Phase 1: Release an intermediate version of the code that can read the new serialization format like JSON, but continues to write in the old format. This ensures that even if you have to roll back after advancing further, the “old” version is still able to read the newly-formatted data.
  2. Phase 2: Once the intermediate version is fully deployed and stable, you can then roll out the new code that writes data in the new format.

By following this two-phase approach, you create a safety net, making it possible to rollback to the previous version without encountering issues related to data format incompatibility.

Safe Rollback when changing data format
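
The sketch below illustrates the intermediate version under these assumptions: it can read records in either the old key=value format or a simplified JSON-like format, but it still writes the old format, so a rollback never leaves behind data the previous version cannot read. Both formats are simplified stand-ins rather than a real serialization library.

// Sketch of an intermediate release: reads both old and new record formats,
// but still writes the old one. The formats here are simplified stand-ins.
public class CompatibleRecordCodec {

    // Old format: "name=alice;age=30"   New format: {"name":"alice","age":"30"}
    public static String writeRecord(String name, String age) {
        // The intermediate version deliberately keeps writing the old format.
        return "name=" + name + ";age=" + age;
    }

    public static String readName(String record) {
        String trimmed = record.trim();
        if (trimmed.startsWith("{")) {
            // Tolerate the new JSON-like format so the later rollout (or a rollback) is safe.
            int start = trimmed.indexOf("\"name\":\"") + "\"name\":\"".length();
            int end = trimmed.indexOf('"', start);
            return trimmed.substring(start, end);
        }
        for (String field : trimmed.split(";")) {
            String[] kv = field.split("=", 2);
            if (kv[0].equals("name")) {
                return kv[1];
            }
        }
        throw new IllegalArgumentException("Unrecognized record format: " + record);
    }

    public static void main(String[] args) {
        System.out.println(readName("name=alice;age=30"));                   // old format
        System.out.println(readName("{\"name\":\"alice\",\"age\":\"30\"}")); // new format
        System.out.println(writeRecord("bob", "41"));                        // still writes old format
    }
}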

Sample CI/CD Pipeline

The following is a sample GitHub Actions workflow .yml file that includes elements for build, test, and deployment. You can create a new file in your repository under .github/workflows/ called ci-cd.yml:

name: CI/CD Pipeline with Phased Deployment

on:
  push:
    branches:
      - main

env:
  IMAGE_NAME: my-java-app

jobs:
  
  unit-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Unit Tests
        run: mvn test

  integration-test:
    needs: unit-test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Integration Tests
        run: mvn integration-test

  functional-test:
    needs: integration-test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Functional Tests
        run: ./run-functional-tests.sh  # Assuming you have a script for functional tests

  load-test:
    needs: functional-test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Load Tests
        run: ./run-load-tests.sh  # Assuming you have a script for load tests

  security-test:
    needs: load-test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Security Tests
        run: ./run-security-tests.sh  # Assuming you have a script for security tests

  build:
    needs: security-test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Build and Package
        run: |
          mvn clean package
          # Each job runs on a fresh runner, so in a real pipeline you would also
          # push this image to a registry for the deployment phases to pull.
          docker build -t ${{ env.IMAGE_NAME }} .

  phase_one:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Deploy to Phase One Cells
        run: ./deploy-to-phase-one.sh # Your custom deploy script for Phase One

      - name: Canary Testing
        run: ./canary-test-phase-one.sh # Your custom canary testing script for Phase One

      - name: Monitoring
        run: ./monitor-phase-one.sh # Your custom monitoring script for Phase One

      - name: Rollback if Needed
        run: ./rollback-phase-one.sh # Your custom rollback script for Phase One
        if: failure()

  phase_two:
    needs: phase_one
    # Repeat the same steps as phase_one but for phase_two
    # ...

  phase_three:
    needs: phase_two
    # Repeat the same steps as previous phases but for phase_three
    # ...

  phase_four:
    needs: phase_three
    # Repeat the same steps as previous phases but for phase_four
    # ...

  phase_five:
    needs: phase_four
    # Repeat the same steps as previous phases but for phase_five
    # ...

The scripts run-functional-tests.sh, run-load-tests.sh, and run-security-tests.sh would contain the logic for running functional, load, and security tests, respectively. You might use tools like Selenium for functional tests, JMeter for load tests, and OWASP ZAP for security tests.

Conclusion

Phased deployment, when coupled with effective monitoring, testing, and feature flags, offers numerous benefits that enhance the reliability, security, and overall quality of software releases. Here’s a summary of the advantages:

  1. Reduced Risk: By deploying changes in smaller increments, you minimize the impact of any single failure, thereby reducing the “blast radius” of issues.
  2. Real-Time Validation: Continuous monitoring provides instant feedback on system performance, enabling immediate detection and resolution of issues.
  3. Enhanced User Experience: Phased deployment allows for real-time user experience monitoring, ensuring that new features or changes meet user expectations and don’t negatively impact engagement.
  4. Data-Driven Decision Making: Metrics collected during the “baking” period and subsequent phases allow for data-driven decisions on whether to proceed with the deployment, roll back, or make adjustments.
  5. Security & Compliance: Monitoring for compliance and security ensures that new code doesn’t introduce vulnerabilities, keeping the system secure throughout the deployment process.
  6. Efficient Resource Utilization: The gradual rollout allows teams to assess how the new changes affect system resources, enabling better capacity planning and resource allocation.
  7. Flexible Rollbacks: In the event of a failure, the phased approach makes it easier to roll back changes, minimizing disruption and maintaining system stability.
  8. Iterative Improvement: Metrics and feedback collected can be looped back into the development cycle for ongoing improvements, making future deployments more efficient and reliable.
  9. Optimized Testing: Various forms of testing like functional, security, performance, and canary can be better focused and validated against real-world scenarios in each phase.
  10. Strategic Rollout: Feature flags allow for even more granular control over who sees what changes, enabling targeted deployments and A/B testing.
  11. Enhanced Troubleshooting: With fewer changes deployed at a time, identifying the root cause of any issues becomes simpler, making for faster resolution.
  12. Streamlined Deployment Pipeline: Incorporating phased deployment into CI/CD practices ensures a smoother, more controlled transition from development to production.

By strategically implementing these approaches, phased deployment enhances the resilience and adaptability of the software development lifecycle, ensuring a more robust, secure, and user-friendly product.

August 26, 2023

Modern Software Development Process

Filed under: Methodologies,Project Management,Technology — admin @ 5:20 pm

The technologies, methodologies, and paradigms that have shaped the way software is designed, built, and delivered have evolved from the Waterfall model, proposed in 1970 by Dr. Winston W. Royce, to the Agile model in the early 2000s. The Waterfall model was characterized by sequential and rigid phases, detailed documentation, and milestone-based progression, and it suffered from inflexibility to adapt to changes, longer time-to-market, high risk and uncertainty, difficulty in accurately estimating costs and time, silos between teams, and late discovery of issues. The Agile model addresses these shortcomings by adopting iterative and incremental development, continuous delivery, feedback loops, lower risk, and cross-functional teams. In addition, modern software development processes over the last few years have adopted DevOps integration, automated testing, microservices and modular architecture, cloud services, infrastructure as code, and data-driven decisions to improve time to market, customer satisfaction, cost efficiency, operational resilience, collaboration, and sustainable development.

Modern Software Development Process

1. Goals

The goals of the modern software development process include:

  1. Speed and Agility: Accelerate the delivery of high-quality software products and updates to respond to market demands and user needs.
  2. Quality and Reliability: Ensure that the software meets specified requirements, performs reliably under all conditions, and is maintainable and scalable.
  3. Reduced Complexity: Simplify complex tasks with the use of refactoring and microservices.
  4. Customer-Centricity: Develop features and products that solve real problems for users and meet customer expectations for usability and experience.
  5. Scalability: Design software that can grow and adapt to changing user needs and technological advancements.
  6. Cost-Efficiency: Optimize resource utilization and operational costs through automation and efficient practices.
  7. Security: Ensure the integrity, availability, and confidentiality of data and services.
  8. Innovation: Encourage a culture of innovation and learning, allowing teams to experiment, iterate, and adapt.
  9. Collaboration and Transparency: Foster a collaborative environment among cross-functional teams and maintain transparency with stakeholders.
  10. Employee Satisfaction: Focus on collaboration, autonomy, and mastery can lead to more satisfied, motivated teams.
  11. Learning and Growth: Allows teams to reflect on their wins and losses regularly, fostering a culture of continuous improvement.
  12. Transparency: Promote transparency with regular meetings like daily stand-ups and sprint reviews.
  13. Flexibility: Adapt to changes even late in the development process.
  14. Compliance and Governance: Ensure that software meets industry-specific regulations and standards.
  15. Risk Mitigation: Make it easier to course-correct, reducing the risk of late project failure.
  16. Competitive Advantage: Enables companies to adapt to market changes more swiftly, offering a competitive edge.

2. Elements of Software Development Process

Key elements of the Software Development Cycle include:

  1. Discovery & Ideation: Involves brainstorming, feasibility studies, and stakeholder buy-in. Lean startup methodologies might be applied for new products.
  2. Continuous Planning: Agile roadmaps, iterative sprint planning, and backlog refinement are continuous activities.
  3. User-Centric Design: The design process is iterative, human-centered, and closely aligned with development.
  4. Development & Testing: Emphasizes automated testing, code reviews, and incremental development. CI/CD pipelines automate much of the build and testing process.
  5. Deployment & Monitoring: DevOps practices facilitate automated, reliable deployments. Real-time monitoring tools and AIOps (Artificial Intelligence for IT Operations) can proactively manage system health.
  6. Iterative Feedback Loops: Customer feedback, analytics, and KPIs inform future development cycles.

3. Roles in Modern Processes

Critical Roles in Contemporary Software Development include:

  1. Product Owner/Product Manager: This role serves as the liaison between the business and technical teams. They are responsible for defining the product roadmap, prioritizing features, and ensuring that the project aligns with stakeholder needs and business objectives.
  2. Scrum Master/Agile Coach: These professionals are responsible for facilitating Agile methodologies within the team. They help organize and run sprint planning sessions, stand-ups, and retrospectives.
  3. Development Team: This group is made up of software engineers and developers responsible for the design, coding, and initial testing of the product. They collaborate closely with other roles, especially the Product Owner, to ensure that the delivered features meet the defined requirements.
  4. DevOps Engineers: DevOps Engineers act as the bridge between the development and operations teams. They focus on automating the Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure that code can be safely, efficiently, and reliably moved from development into production.
  5. QA Engineers: Quality Assurance Engineers play a vital role in the software development life cycle by creating automated tests that are integrated into the CI/CD pipeline.
  6. Data Scientists: These individuals use analytical techniques to draw actionable insights from data generated by the application. They may look at user behavior, application performance, or other metrics to provide valuable feedback that can influence future development cycles or business decisions.
  7. Security Engineers: Security Engineers are tasked with ensuring that the application is secure from various types of threats. They are involved from the early stages of development to ensure that security is integrated into the design (“Security by Design”).

Each of these roles plays a critical part in modern software development, contributing to more efficient processes, higher-quality output, and ultimately, greater business value.

4. Phases of Modern SDLC

In the Agile approach to the software development lifecycle, multiple phases are organized not in a rigid, sequential manner as seen in the Waterfall model, but as part of a more fluid, iterative process. This allows for continuous feedback loops and enables incremental delivery of features. Below is an overview of the various phases that constitute the modern Agile-based software development lifecycle:

4.1 Inception Phase

The Inception phase is the initial stage in a software development project, often seen in frameworks like the Rational Unified Process (RUP) or even in Agile methodologies in a less formal manner. It sets the foundation for the entire project by defining its scope, goals, constraints, and overall vision.

4.1.1 Objectives

The objectives of the inception phase include:

  1. Project Vision and Scope: Define what the project is about, its boundaries, and what it aims to achieve.
  2. Feasibility Study: Assess whether the project is feasible in terms of time, budget, and technical constraints.
  3. Stakeholder Identification: Identify all the people or organizations who have an interest in the project, such as clients, end-users, developers, and more.
  4. Risk Assessment: Evaluate potential risks, like technical challenges, resource limitations, and market risks, and consider how they can be mitigated.
  5. Resource Planning: Preliminary estimation of the resources (time, human capital, budget) required for the project.
  6. Initial Requirements Gathering: Collect high-level requirements, often expressed as epics or user stories, which will be refined later.
  7. Project Roadmap: Create a high-level project roadmap that outlines major milestones and timelines.

4.1.2 Practices

The inception phase includes the following practices:

  • Epics Creation: High-level functionalities and features are described as epics.
  • Feasibility Study: Analyze the technical and financial feasibility of the proposed project.
  • Requirements: Identify customers and document the requirements captured from them. Review requirements with all stakeholders and get alignment on key features and deliverables. Watch for scope creep so that software changes can be delivered incrementally.
  • Estimated Effort: A high-level effort estimate, in terms of man-months, t-shirt sizes, or story points, to build and deliver the features.
  • Business Metrics: Define key business metrics that will be tracked such as adoption, utilization and success criteria for the features.
  • Customer and Dependencies Impact: Understand how the changes will impact customers as well as upstream/downstream dependencies.
  • Roadmapping: Develop a preliminary roadmap to guide the project’s progression.

4.1.3 Roles and Responsibilities:

The roles and responsibilities in the inception phase include:

  • Stakeholders: Validate initial ideas and provide necessary approvals or feedback.
  • Product Owner: Responsible for defining the vision and scope of the product, expressed through a set of epics and an initial product backlog. The product manager interacts with customers, defines the customer problems and works with other stakeholders to find solutions. The product manager defines end-to-end customer experience and how the customer will benefit from the proposed solution.
  • Scrum Master: Facilitates the Inception meetings and ensures that all team members understand the project’s objectives and scope. The Scrum master helps write use-cases, user-stories and functional/non-functional requirements into the Sprint Backlog.
  • Development Team: Provide feedback on technical feasibility and initial estimates for the project scope.

By carefully executing the Inception phase, the team can ensure that everyone has a clear understanding of the ‘what,’ ‘why,’ and ‘how’ of the project, thereby setting the stage for a more organized and focused development process.

4.2. Planning Phase

The Planning phase is an essential stage in the Software Development Life Cycle (SDLC), especially in methodologies that adopt Agile frameworks like Scrum or Kanban. This phase aims to detail the “how” of the project, which is built upon the “what” and “why” established during the Inception phase. The Planning phase involves specifying objectives, estimating timelines, allocating resources, and setting performance measures, among other activities.

4.2.1 Objectives

The objectives of the planning phase include:

  1. Sprint Planning: Determine the scope of the next sprint, specifying what tasks will be completed.
  2. Release Planning: Define the features and components that will be developed and released in the upcoming iterations.
  3. Resource Allocation: Assign tasks and responsibilities to team members based on skills and availability.
  4. Timeline and Effort Estimation: Define the timeframe within which different tasks and sprints should be completed.
  5. Risk Management: Identify, assess, and plan for risks and contingencies.
  6. Technical Planning: Decide the architectural layout, technologies to be used, and integration points, among other technical aspects.

4.2.2 Practices:

The planning phase includes the following practices:

  • Sprint Planning: A meeting where the team decides what to accomplish in the coming sprint.
  • Release Planning: High-level planning to set goals and timelines for upcoming releases.
  • Backlog Refinement: Ongoing process to add details, estimates, and order to items in the Product Backlog.
  • Effort Estimation: Techniques like story points, t-shirt sizes, or time-based estimates to assess how much work is involved in a given task or story.

4.2.3 Roles and Responsibilities:

The roles and responsibilities in the planning phase include:

  • Product Owner: Prioritizes the Product Backlog and clarifies details of backlog items. Helps the team understand which items are most crucial to the project’s success.
  • Scrum Master: Facilitates planning meetings and ensures that the team has what they need to complete their tasks. Addresses any impediments that the team might face.
  • Development Team: Provides effort estimates for backlog items, helps break down stories into tasks, and commits to the amount of work they can accomplish in the next sprint.
  • Stakeholders: May provide input on feature priority or business requirements, though they typically don’t participate in the detailed planning activities.

4.2.4 Key Deliverables:

Key deliverables in the planning phase include:

  • Sprint Backlog: A list of tasks and user stories that the development team commits to complete in the next sprint.
  • Execution Plan: A plan outlining the execution of the software development including dependencies from other teams. The plan also identifies key reversible and irreversible decisions.
  • Deployment and Release Plan: A high-level plan outlining the features and major components to be released over a series of iterations. This plan also includes feature flags and other configuration parameters such as throttling limits.
  • Resource Allocation Chart: A detailed breakdown of who is doing what.
  • Risk Management Plan: A documentation outlining identified risks, their severity, and mitigation strategies.

By thoroughly engaging in the Planning phase, teams can have a clear roadmap for what needs to be achieved, how it will be done, who will do it, and when it will be completed. This sets the project on a course that maximizes efficiency, minimizes risks, and aligns closely with the stakeholder requirements and business objectives.

4.3. Design Phase

The Design phase in the Software Development Life Cycle (SDLC) is a critical step that comes after planning and before coding. During this phase, the high-level architecture and detailed design of the software system are developed. This serves as a blueprint for the construction of the system, helping the development team understand how the software components will interact, what data structures will be used, and how user interfaces will be implemented, among other things.

4.3.1 Objectives:

The objectives of the design phase include:

  1. Architectural Design: Define the software’s high-level structure and interactions between components. Understand upstream and downstream dependencies and how architecture decisions affect those dependencies. The architecture will ensure scalability to a minimum of 2x of the peak traffic and will build redundancy to avoid single points of failure. The architecture decisions should be documented in an Architecture Decision Record format and should be defined in terms of reversible and irreversible decisions.
  2. Low-Level Design: Break down the architectural components into detailed design specifications including functional and non-functional considerations. The design documents should include alternative solutions and tradeoffs when selecting a recommended approach. The design document should consider how the software can be delivered incrementally and whether multiple components can be developed in parallel. Other best practices such as not breaking backward compatibility, composable modules, consistency, idempotency, pagination, no unbounded operations, validation, and purposeful design should be applied to the software design.
  3. API Design: Specify the endpoints, request/response models, and the underlying architecture for any APIs that the system will expose.
  4. User Interface Design: Design the visual elements and user experiences.
  5. Data Model Design: Define the data structures and how data will be stored, accessed, and managed.
  6. Functional Design: Elaborate on the algorithms, interactions, and methods that will implement software features.
  7. Security Design: Implement security measures like encryption, authentication, and authorization.
  8. Performance Considerations: Design for scalability, reliability, and performance optimization.
  9. Design for Testability: Ensure elements can be easily tested.
  10. Operational Excellence: Document monitoring, logging, operational and business metrics, alarms, and dashboards to monitor health of the software components.

4.3.2 Practices:

The design phase includes the following practices:

  • Architectural and Low-Level Design Reviews: Conducted to validate the robustness, scalability, and performance of the design. High-level reviews focus on the system architecture, while low-level reviews dig deep into modules, classes, and interfaces.
  • API Design Review: A specialized review to ensure that the API design adheres to RESTful principles, is consistent, and meets performance and security benchmarks.
  • Accessibility Review: Review UI for accessibility support.
  • Design Patterns: Utilize recognized solutions for common design issues.
  • Wireframing and Prototyping: For UI/UX design.
  • Sprint Zero: Sometimes used in Agile for initial setup, including design activities.
  • Backlog Refinement: Further details may be added to backlog items in the form of acceptance criteria and design specs.

4.3.3 Roles and Responsibilities:

The roles and responsibilities in the design phase include:

  • Product Owner: Defines the acceptance criteria for user stories and validates that the design meets business and functional requirements.
  • Scrum Master: Facilitates design discussions and ensures that design reviews are conducted effectively.
  • Development Team: Creates the high-level architecture, low-level detailed design, and API design. Chooses design patterns and data models.
  • UX/UI Designers: Responsible for user interface and user experience design.
  • System Architects: Involved in high-level design and may also review low-level designs and API specifications.
  • API Designers: Specialized role focusing on API design aspects such as endpoint definition, rate limiting, and request/response models.
  • QA Engineers: Participate in design reviews to ensure that the system is designed for testability.
  • Security Engineers: Involved in both high-level and low-level design reviews to ensure that security measures are adequately implemented.

4.3.4 Key Deliverables:

Key deliverables in the design phase include:

  • High-Level Architecture Document: Describes the architectural layout and high-level components.
  • Low-Level Design Document: Provides intricate details of each module, including pseudo-code, data structures, and algorithms.
  • API Design Specification: Comprehensive documentation for the API including endpoints, methods, request/response formats, and error codes.
  • Database Design: Includes entity-relationship diagrams, schema designs, etc.
  • UI Mockups: Prototypes or wireframes of the user interface.
  • Design Review Reports: Summaries of findings from high-level, low-level, and API design reviews.
  • Proof of Concept and Spikes: Build proof of concepts or execute spikes to learn about high risk design components.
  • Architecture Decision Records: Documents key decisions that were made including tradeoffs, constraints, stakeholders and final decision.

The Design phase serves as the blueprint for the entire project. Well-thought-out and clear design documentation helps mitigate risks, provides a clear vision for the development team, and sets the stage for efficient and effective coding, testing, and maintenance.

4.4 Development Phase

The Development phase in the Software Development Life Cycle (SDLC) is the stage where the actual code for the software is written, based on the detailed designs and requirements specified in the preceding phases. This phase involves coding, unit testing, integration, and sometimes even preliminary system testing to validate that the implemented features meet the specifications.

4.4.1 Objectives:

The objectives of the development phase include:

  1. Coding: Translate design documentation and specifications into source code.
  2. Modular Development: Build the software in chunks or modules for easier management, testing, and debugging.
  3. Unit Testing: Verify that each unit or module functions as designed.
  4. Integration: Combine separate units and modules into a single, functioning system.
  5. Code Reviews: Validate that code adheres to standards, guidelines, and best practices.
  6. Test Plans: Review test plans for all test-cases that will validate the changes to the software.
  7. Documentation: Produce inline code comments, READMEs, and technical documentation.
  8. Early System Testing: Sometimes, a form of system testing is performed to catch issues early.

4.4.2 Practices:

The development phase includes the following practices:

  • Pair Programming: Two developers work together at one workstation, enhancing code quality.
  • Test-Driven Development (TDD): Write failing tests before writing code, then write code to pass the tests.
  • Code Reviews: Peer reviews to maintain code quality.
  • Continuous Integration: Frequently integrate code into a shared repository and run automated tests.
  • Version Control: Use of tools like Git to manage changes and history.
  • Progress Report: Use of burndown charts, risk analysis, blockers, and other data to keep stakeholders up to date with the progress of the development effort.

4.4.3 Roles and Responsibilities:

The roles and responsibilities in the development phase include:

  • Product Owner: Prioritizes features and stories in the backlog that are to be developed, approves or rejects completed work based on acceptance criteria.
  • Scrum Master: Facilitates daily stand-ups, removes impediments, and ensures the team is focused on the tasks at hand.
  • Software Developers: Write the code, perform unit tests, and integrate modules.
  • QA Engineers: Often involved early in the development phase to write automated test scripts, and to run unit tests along with developers.
  • DevOps Engineers: Manage the CI/CD pipeline to automate code integration and preliminary testing.
  • Technical Leads/Architects: Provide guidance and review code to ensure it aligns with architectural plans.

4.4.4 Key Deliverables:

Key deliverables in the development phase include:

  • Source Code: The actual code files, usually stored in a version control system.
  • Unit Test Cases and Results: Documentation showing what unit tests have been run and their results.
  • Code Review Reports: Findings from code reviews.
  • Technical Documentation: Sometimes known as codebooks or developer guides, these detail how the code works for future maintenance.

4.4.5 Best Practices for Development:

Best practices in the development phase include:

  • Clean Code: Write code that is easy to read, maintain, and extend.
  • Modularization: Build in a modular fashion for ease of testing, debugging, and scalability.
  • Commenting and Documentation: In-line comments and external documentation to aid in future maintenance and understanding.
  • Code Repositories: Use version control systems to keep a historical record of all changes and to facilitate collaboration.

The Development phase is where the design and planning turn into a tangible software product. Following best practices and guidelines during this phase can significantly impact the quality, maintainability, and scalability of the end product.

4.5. Testing Phase

The Testing phase in the Software Development Life Cycle (SDLC) is crucial for validating the system’s functionality, performance, security, and stability. This phase involves various types of testing methodologies to ensure that the software meets all the requirements and can handle expected and unexpected situations gracefully.

4.5.1 Objectives:

The objectives of the testing phase include:

  1. Validate Functionality: Ensure all features work as specified in the requirements.
  2. Ensure Quality: Confirm that the system is reliable, secure, and performs optimally.
  3. Identify Defects: Find issues that need to be addressed before the software is deployed.
  4. Risk Mitigation: Assess and mitigate potential security and performance risks.

4.5.2 Practices:

The testing phase includes the following practices:

  • Test Planning: Create a comprehensive test plan detailing types of tests, scope, schedule, and responsibilities.
  • Test Case Design: Prepare test cases based on various testing requirements.
  • Test Automation: Automate repetitive and time-consuming tests.
  • Continuous Testing: Integrate testing into the CI/CD pipeline for continuous feedback.

4.5.3 Types of Testing and Responsibilities:

The testing phase includes the following types of testing:

  1. Integration Testing:
    Responsibilities: QA Engineers, DevOps Engineers
    Objective: Verify that different modules or services work together as expected.
  2. Functional Testing:
    Responsibilities: QA Engineers
    Objective: Validate that the system performs all the functions as described in the specifications.
  3. Load/Stress Testing:
    Responsibilities: Performance Test Engineers, DevOps Engineers
    Objective: Check how the system performs under heavy loads.
  4. Security and Penetration Testing:
    Responsibilities: Security Engineers, Ethical Hackers
    Objective: Identify vulnerabilities that an attacker could exploit.
  5. Canary Testing:
    Responsibilities: DevOps Engineers, Product Owners
    Objective: Validate that new features or updates will perform well with a subset of the production environment before full deployment.
  6. Other Forms of Testing:
    Smoke Testing: Quick tests to verify basic functionality after a new build or deployment.
    Regression Testing: Ensure new changes haven’t adversely affected existing functionalities.
    User Acceptance Testing (UAT): Final validation that the system meets business needs, usually performed by stakeholders or end-users.

4.5.4 Key Deliverables:

  • Test Plan: Describes the scope, approach, resources, and schedule for the testing activities.
  • Test Cases: Detailed description of what to test, how to test, and expected outcomes.
  • Test Scripts: Automated scripts for testing.
  • Test Reports: Summary of testing activities, findings, and results.

4.5.5 Best Practices for Testing:

  • Early Involvement: Involve testers from the early stages of SDLC for better understanding and planning.
  • Code Reviews: Conduct code reviews to catch potential issues before the testing phase.
  • Data-Driven Testing: Use different sets of data to evaluate how the application performs with various inputs.
  • Monitoring and Feedback Loops: Integrate monitoring tools to get real-time insights during testing and improve the test scenarios continuously.

The Testing phase is crucial for delivering a reliable, secure, and robust software product. The comprehensive testing strategy that includes various types of tests ensures that the software is well-vetted before it reaches the end-users.

4.6. Deployment Phase

The Deployment phase in the Software Development Life Cycle (SDLC) involves transferring the well-tested codebase from the staging environment to the production environment, making the application accessible to the end-users. This phase aims to ensure a smooth transition of the application from development to production with minimal disruptions and optimal performance.

4.6.1 Objectives:

The objectives of the deployment phase include:

  1. Release Management: Ensure that the application is packaged and released properly.
  2. Transition: Smoothly transition the application from staging to production.
  3. Scalability: Ensure the infrastructure can handle the production load.
  4. Rollback Plans: Prepare contingencies for reverting changes in case of failure.

4.6.2 Key Components and Practices:

The key components and practices in the deployment phase include:

  1. Continuous Deployment (CD):
    Responsibilities: DevOps Engineers, QA Engineers, Developers
    Objective: Automatically deploy all code changes to the production environment that pass the automated testing phase.
    Tools: Jenkins, GitLab CI/CD, GitHub Actions, Spinnaker
  2. Infrastructure as Code (IaC):
    Responsibilities: DevOps Engineers, System Administrators
    Objective: Automate the provisioning and management of infrastructure using code.
    Tools: Terraform, AWS CloudFormation, Ansible
  3. Canary Testing:
    Responsibilities: DevOps Engineers, QA Engineers
    Objective: Gradually roll out the new features to a subset of users before a full-scale rollout to identify potential issues.
  4. Phased Deployment:
    Responsibilities: DevOps Engineers, Product Managers
    Objective: Deploy the application in phases to monitor its stability and performance, enabling easier troubleshooting.
  5. Rollback:
    Responsibilities: DevOps Engineers
    Objective: Be prepared to revert the application to its previous stable state in case of any issues.
  6. Feature Flags:
    Responsibilities: Developers, Product Managers
    Objective: Enable or disable features in real-time without deploying new code.
  7. Resilience:
    Responsibilities: DevOps Engineers, Developers
    Objective: Prevent retry storms, throttle the number of client requests, and apply reliability patterns such as Circuit Breakers and Bulkheads (see the circuit-breaker sketch after this list). Avoid bimodal behavior and maintain software latency and throughput within the defined SLAs.
  8. Scalability:
    Responsibilities: DevOps Engineers, Developers
    Objective: Monitor scalability limits and implement elastic scalability. Review scaling limits periodically.
  9. Throttling and Sharding Limits (and other Configurations):
    Responsibilities: DevOps Engineers, Developers
    Objective: Review configuration and performance configurations such as throttling and sharding limits.
  10. Access Policies:
    Responsibilities: DevOps Engineers, Developers
    Objective: Review access policies, permissions and roles that can access the software.
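
To illustrate the resilience objective above, here is a minimal, hand-rolled circuit-breaker sketch in Java; the failure threshold and open duration are hypothetical, and production systems would typically use a library such as Resilience4j instead.

import java.util.function.Supplier;

// Minimal circuit-breaker sketch: trips open after consecutive failures,
// then allows a trial call after a cooling-off period. Thresholds are hypothetical.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final long openDurationMs;
    private int consecutiveFailures = 0;
    private long openedAtMs = 0;

    public SimpleCircuitBreaker(int failureThreshold, long openDurationMs) {
        this.failureThreshold = failureThreshold;
        this.openDurationMs = openDurationMs;
    }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        boolean open = consecutiveFailures >= failureThreshold
                && (System.currentTimeMillis() - openedAtMs) < openDurationMs;
        if (open) {
            return fallback.get(); // fail fast instead of piling up retries
        }
        try {
            T result = action.get();
            consecutiveFailures = 0; // a success closes the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAtMs = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3, 5_000);
        for (int i = 0; i < 5; i++) {
            String result = breaker.call(
                    () -> { throw new RuntimeException("downstream timeout"); },
                    () -> "fallback response");
            System.out.println(result);
        }
    }
}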

4.6.3 Roles and Responsibilities:

The deployment phase includes the following roles and responsibilities:

  • DevOps Engineers: Manage CI/CD pipelines, IaC, and automate deployments.
  • Product Managers: Approve deployments based on feature completeness and business readiness.
  • QA Engineers: Ensure the application passes all tests in the staging environment before deployment.
  • System Administrators: Ensure that the infrastructure is ready and scalable for deployment.

4.6.4 Key Deliverables:

The deployment phase includes the following key deliverables:

  • Deployment Checklist: A comprehensive list of tasks and checks before, during, and after deployment.
  • CD Pipeline Configuration: The setup details for the Continuous Deployment pipeline.
  • Automated Test Cases: Essential for validating the application before automatic deployment.
  • Operational, Security and Compliance Review Documents: These documents will define checklists about operational excellence, security and compliance support in the software.
  • Deployment Logs: Automated records of what was deployed, when, and by whom (usually by the automation system).
  • Logging: Review data that will be logged and respective log levels so that data privacy is not violated and excessive logging is avoided.
  • Deployment Schedule: A timeline for the deployment process.
  • Observability: Health dashboard to monitor operational and business metrics, and alarms to receive notifications when service-level-objectives (SLO) are violated. The operational metrics will include availability, latency, and error metrics. The health dashboard will monitor utilization of CPU, disk space, memory, and network resources.
  • Rollback Plan: A detailed plan for reverting to the previous stable version if necessary.

4.6.5 Best Practices:

The following are a few best practices for the deployment phase:

  • Automated Deployments: Use automation tools for deployment of code and infrastructure to minimize human error.
  • Monitoring and Alerts: Use monitoring tools to get real-time insights into application performance and set up alerts for anomalies.
  • Version Control: Ensure that all deployable artifacts are versioned for traceability.
  • Comprehensive Testing: Given the automated nature of Continuous Deployment, having a comprehensive suite of automated tests is crucial.
  • Rollback Strategy: Have an automated rollback strategy in case the new changes result in system failures or critical bugs.
  • Feature Toggles: Use feature flags to control the release of new features, which can be enabled or disabled without having to redeploy.
  • Audit Trails: Maintain logs and history for compliance and to understand what was deployed and when.
  • Documentation: Keep detailed records of the deployment process, configurations, and changes.
  • Stakeholder Communication: Keep all stakeholders in the loop regarding deployment schedules, success, or issues.
  • Feedback from Early Adopters: If the software is being released to internal or beta customers, then capture feedback from those early adopters, including any bug reports.
  • Marketing and external communication: The release may need to be coordinated with a marketing campaign so that customers can be notified about new features.

The Deployment phase is critical for ensuring that the software is reliably and securely accessible by the end-users. A well-planned and executed deployment strategy minimizes risks and disruptions, leading to a more dependable software product.

4.7 Maintenance Phase

The Maintenance phase in the Software Development Life Cycle (SDLC) is the ongoing process of ensuring the software’s continued effective operation and performance after its release to the production environment. The objective of this phase is to sustain the software in a reliable state, provide continuous support, and make iterative improvements or patches as needed.

4.7.1 Objectives:

The objectives of the maintenance phase include:

  1. Bug Fixes: Address any issues or defects that arise post-deployment.
  2. Updates & Patches: Release minor or major updates and patches to improve functionality or security.
  3. Optimization: Tune the performance of the application based on metrics and feedback.
  4. Scalability: Ensure that the software can handle growth in terms of users, data, and transaction volume.
  5. Documentation: Update all documents related to software changes and configurations.

4.7.2 Key Components and Practices:

The key components and practices of the maintenance phase include:

  1. Incident Management:
    Responsibilities: Support Team, DevOps Engineers
    Objective: Handle and resolve incidents affecting the production environment.
  2. Technical Debt Management:
    Responsibilities: Development Team, Product Managers
    Objective: Prioritize and resolve accumulated technical debt to maintain code quality and performance.
  3. Security Updates:
    Responsibilities: Security Engineers, DevOps Engineers
    Objective: Regularly update and patch the software to safeguard against security vulnerabilities.
  4. Monitoring & Analytics:
    Responsibilities: DevOps Engineers, Data Analysts
    Objective: Continuously monitor software performance, availability, and usage to inform maintenance tasks.
  5. Documentation and Runbooks:
    Responsibilities: Support Team, DevOps Engineers
    Objective: Define runbooks and cookbooks for development processes and operational issues.

4.7.3 Roles and Responsibilities:

The maintenance phase includes the following roles and responsibilities:

  • DevOps Engineers: Monitor system health, handle deployments for updates, and coordinate with the support team for incident resolution.
  • Support Team: Provide customer support, report bugs, and assist in reproducing issues for the development team.
  • Development Team: Develop fixes, improvements, and updates based on incident reports, performance metrics, and stakeholder feedback.
  • Product Managers: Prioritize maintenance tasks based on customer needs, business objectives, and technical requirements.
  • Security Engineers: Regularly audit the software for vulnerabilities and apply necessary security patches.

4.7.4 Key Deliverables:

Key deliverables for the maintenance phase include:

  • Maintenance Plan: A detailed plan outlining maintenance activities, schedules, and responsible parties.
  • Patch Notes: Documentation describing what has been fixed or updated in each new release.
  • Performance Reports: Regular reports detailing the operational performance of the software.

4.7.5 Best Practices for Maintenance:

The following are a few best practices for the maintenance phase:

  • Automated Monitoring: Use tools like Grafana, Prometheus, or Zabbix for real-time monitoring of system health.
  • Feedback Loops: Use customer feedback and analytics to prioritize maintenance activities.
  • Version Control: Always version updates and patches for better tracking and rollback capabilities.
  • Knowledge Base: Maintain a repository of common issues and resolutions to accelerate incident management.
  • Scheduled Maintenance: Inform users about planned downtimes for updates and maintenance activities.

The Maintenance phase is crucial for ensuring that the software continues to meet user needs and operate effectively in the ever-changing technical and business landscape. Proper maintenance ensures the longevity, reliability, and continual improvement of the software product. Also, note that although maintenance is described as a separate phase above, it encompasses activities from the other phases, from inception to deployment, and each change is developed and deployed incrementally in an iterative Agile process.

5. Best Practices for Ensuring Reliability, Quality, and Incremental Delivery

The following best practices ensure incremental delivery with higher reliability and quality:

5.1. Iterative Development:

Embrace the Agile principle of delivering functional software frequently. The focus should be on breaking down the product into small, manageable pieces and improving it in regular iterations, usually two to four-week sprints.

  • Tools & Techniques: Feature decomposition, Sprint planning, Short development cycles.
  • Benefits: Faster time to market, easier bug isolation, and tracking, ability to incorporate user feedback quickly.

5.2. Automated Testing:

Implement Test-Driven Development (TDD) or Behavior-Driven Development (BDD) to script tests before the actual coding begins. Maintain these tests to run automatically every time a change is made.

  • Tools & Techniques: JUnit, Selenium, Cucumber.
  • Benefits: Instant feedback on code quality, regression testing, enhanced code reliability.
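
As a small illustration of the TDD flow with JUnit 5, the tests below would be written first and would fail until the (hypothetical) PriceCalculator is implemented; the minimal implementation that makes them pass is shown underneath.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Written before the production code exists: these tests drive the implementation
// of the hypothetical PriceCalculator and run automatically on every change.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAtOneHundredOrMore() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.discountedPrice(100.0), 0.0001);
    }

    @Test
    void leavesSmallOrdersUnchanged() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(50.0, calculator.discountedPrice(50.0), 0.0001);
    }
}

// Minimal implementation written after the tests, just enough to make them pass.
class PriceCalculator {
    double discountedPrice(double price) {
        return price >= 100.0 ? price * 0.9 : price;
    }
}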

5.3. Design Review:

A formal process where architects and developers evaluate high-level and low-level design documents to ensure they meet the project requirements, are scalable, and adhere to best practices.

  • Tools & Techniques: Design diagrams, UML, Peer Reviews, Design Review Checklists.
  • Benefits: Early identification of design flaws, alignment with stakeholders, consistency across the system architecture.

5.4 Code Reviews:

Before any code gets merged into the main repository, it should be rigorously reviewed by other developers to ensure it adheres to coding standards, is optimized, and is free of bugs.

  • Tools & Techniques: Git Pull Requests, Code Review Checklists, Pair Programming.
  • Benefits: Team-wide code consistency, early detection of anti-patterns, and a secondary check for overlooked issues.

5.5 Security Review:

A comprehensive evaluation of the security aspects of the application, involving both static and dynamic analyses, is conducted to identify potential vulnerabilities.

  • Tools & Techniques: OWASP Top 10, Security Scanners like Nessus or Qualys, Code Review tools like Fortify, Penetration Testing.
  • Benefits: Proactive identification of security vulnerabilities, adherence to security best practices, and compliance with legal requirements.

5.6 Operational Review:

Before deploying any new features or services, assess the readiness of the operational environment, including infrastructure, data backup, monitoring, and support plans.

  • Tools & Techniques: Infrastructure as Code tools like Terraform, Monitoring tools like Grafana, Documentation.
  • Benefits: Ensures the system is ready for production, mitigates operational risks, confirms that deployment and rollback strategies are in place.

5.7 CI/CD (Continuous Integration and Continuous Deployment):

Integrate all development work frequently and deliver changes to the end-users reliably and rapidly using automated pipelines.

  • Tools & Techniques: Jenkins, GitLab CI/CD, Docker, Kubernetes.
  • Benefits: Quicker discovery of integration bugs, reduced lead time for feature releases, and improved deployability.

5.8 Monitoring:

Implement sophisticated monitoring tools that continuously observe system performance, user activity, and operational health.

  • Tools & Techniques: Grafana, Prometheus, New Relic.
  • Benefits: Real-time data analytics, early identification of issues, and rich metrics for performance tuning.

5.9 Retrospectives:

At the end of every sprint or project phase, the team should convene to discuss what worked well, what didn’t, and how processes can be improved.

  • Tools & Techniques: Post-its for brainstorming, Sprint Retrospective templates, Voting mechanisms.
  • Benefits: Continuous process improvement, team alignment, and reflection on the project’s success and failures.

5.10 Product Backlog Management:

A live document containing all known requirements, ranked by priority and constantly refined to reflect changes and learnings.

  • Tools & Techniques: JIRA, Asana, Scrum boards.
  • Benefits: Focused development on high-impact features, adaptability to market or user needs.

5.11. Kanban for Maintenance:

For ongoing maintenance work and technical debt management, utilize a Kanban system to visualize work, limit work-in-progress, and maximize efficiency.

  • Tools & Techniques: Kanban boards, JIRA Kanban, Trello.
  • Benefits: Dynamic prioritization, quicker task completion, and efficient resource utilization.

5.12 Feature Flags:

Feature flags allow developers to toggle the availability of certain functionalities without deploying new versions.

  • Tools & Techniques: LaunchDarkly, Config files, Custom-built feature toggles.
  • Benefits: Risk mitigation during deployments, simpler rollbacks, and fine-grained control over feature releases.

5.13 Documentation:

Create comprehensive documentation, ranging from code comments and API docs to high-level architecture guides and FAQs for users.

  • Tools & Techniques: Wiki platforms, OpenAPI/Swagger for API documentation, Code comments.
  • Benefits: Streamlined onboarding for new developers, easier troubleshooting, and long-term code maintainability.

By diligently applying these multi-layered Agile best practices and clearly defined roles, your SDLC will be a well-oiled machine—more capable of rapid iterations, quality deliverables, and high adaptability to market or user changes.

6. Conclusion

The Agile process combined with modern software development practices offers an integrated and robust framework for building software that is adaptable, scalable, and high-quality. This approach is geared towards achieving excellence at every phase of the Software Development Life Cycle (SDLC), from inception to deployment and maintenance. The key benefits of this modern software development process include flexibility and adaptability, reduced time-to-market, enhanced quality, operational efficiency, risk mitigation, continuous feedback, transparency, collaboration, cost-effectiveness, compliance and governance, and documentation for sustainment. By leveraging these Agile and modern software development practices, organizations can produce software solutions that are not only high-quality and reliable but also flexible enough to adapt to ever-changing requirements and market conditions.

August 23, 2023

Failures in MicroService Architecture

Filed under: Computing,Microservices — admin @ 12:54 pm

Microservice architecture is an evolution of Monolithic and Service-Oriented Architecture (SOA), where an application is built as a collection of loosely coupled, independently deployable services. Each microservice usually corresponds to a specific business functionality and can be developed, deployed, and scaled independently. In contrast to Monolithic Architecture, which lacks modularity, and Service-Oriented Architecture (SOA), which is more coarse-grained and prone to a single point of failure, the Microservice architecture offers better support for modularity, independent deployment, and distributed development, where teams are often organized around services in line with Conway’s law. However, Microservice architecture introduces several challenges in terms of:

  • Network Complexity: Microservices communicate over the network, increasing the likelihood of network-related issues (See Fallacies of distributed computing).
  • Distributed System Challenges: Managing a distributed system introduces complexities in terms of synchronization, data consistency, and handling partial failures.
  • Monitoring and Troubleshooting: Due to the distributed nature, monitoring and troubleshooting can become more complex, requiring specialized tools and practices.
  • Potential for Cascading Failures: Failure in one service can lead to failures in dependent services if not handled properly.
(Figure: Microservices Challenges)

Faults, Errors and Failures

The challenges associated with microservice architecture manifest at different stages and require understanding the concepts of faults, errors, and failures:

1. Faults:

Faults in a microservice architecture could originate from various sources, including:

  • Software Bugs: A defect in one service may cause incorrect behavior but remain dormant until triggered.
  • Network Issues: Problems in network connectivity can be considered faults, waiting to lead to errors.
  • Configuration Mistakes: Incorrect configuration of a service is another potential fault.
  • Dependency Vulnerabilities: A weakness or vulnerability in an underlying library or service that hasn’t yet caused a problem.

Following are major concerns that the Microservice architecture must address for managing faults:

  • Loose Coupling and Independence: With services being independent, a fault in one may not necessarily impact others, provided the system is designed with proper isolation.
  • Complexity: Managing and predicting faults across multiple services and their interactions can be complex.
  • Isolation: Properly isolating faults can prevent them from causing widespread problems. For example, a fault in one service shouldn’t impact others if isolation is well implemented.
  • Detecting and Managing Faults: Given the distributed nature of microservices, detecting and managing faults can be complex.

2. Error:

When a fault gets activated under certain conditions, it leads to an error. In microservices, errors can manifest as:

  • Communication Errors: Failure in service-to-service communication due to network problems or incompatible data formats.
  • Data Inconsistency: An error in one service leading to inconsistent data across different parts of the system.
  • Service Unavailability: A service failing to respond due to an internal error.

Microservice architecture must also account for diagnosing and handling errors, including:

  • Propagation: Errors can propagate quickly across services, leading to cascading failures if not handled properly.
  • Transient Errors: Network-related or temporary errors might be resolved by retries, adding complexity to error handling.
  • Monitoring and Logging Challenges: Understanding and diagnosing errors in a distributed system can be more complex.

3. Failure:

Failure is the inability of a system to perform its required function due to unhandled errors. In microservices, this might include:

  • Partial Failure: Failure of one or more services leading to degradation in functionality.
  • Total System Failure: Cascading errors causing the entire system to become unavailable.

Further, failure handling in Microservice architecture poses additional challenges such as:

  • Cascading Failures: A failure in one service might lead to failures in others, particularly if dependencies are tightly interwoven and error handling is insufficient.
  • Complexity in Recovery: Coordinating recovery across multiple services can be challenging.

The faults and errors can be further categorized into customer-related and system-related:

  • Customer-Related: These include improper usage of an API, incorrect input data, calling an endpoint that doesn’t exist, attempting an action without proper authorization, or any other incorrect action taken by the client. Since these errors are often due to incorrect usage, simply retrying the same request without fixing the underlying issue is unlikely to resolve the error. For example, if a customer sends an invalid parameter, retrying the request with the same invalid parameter will produce the same error. In many cases, customer errors are returned with specific HTTP status codes in the 4xx range (e.g., 400 Bad Request, 403 Forbidden), indicating that the client must modify the request before retrying.
  • System-Related: These can stem from various aspects of the microservices, such as coding bugs, network misconfigurations, a timeout occurring, or issues with underlying hardware. These errors are typically not the fault of the client and may be transient, meaning they could resolve themselves over time or upon retrying. System errors often correlate with HTTP status codes in the 5xx range (e.g., 500 Internal Server Error, 503 Service Unavailable), indicating an issue on the server side. In many cases, these requests can be retried after a short delay, possibly succeeding if the underlying issue was temporary.
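As a minimal illustration of this distinction, the sketch below retries only 5xx responses and surfaces 4xx responses immediately; it assumes the third-party requests HTTP client is available, and the endpoint URL is hypothetical:

import time
import requests  # assumption: the 'requests' HTTP client is installed

RETRYABLE = {500, 502, 503, 504}   # system-related, often transient

def call_with_retries(url, attempts=3, base_delay=0.5):
    for attempt in range(attempts):
        resp = requests.get(url, timeout=2)
        if resp.status_code < 400:
            return resp                              # success
        if resp.status_code in RETRYABLE and attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry the transient error
            continue
        resp.raise_for_status()                      # 4xx (or retries exhausted): surface to caller

# call_with_retries("https://api.example.com/orders")   # hypothetical endpoint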

Causes Related to Faults, Errors and Failures

The challenges in microservice architecture are rooted in its distributed nature, complexity, and interdependence of services. Here are common causes of the challenges related to faults, errors, and failures, including the distinction between customer and system errors:

1. Network Complexity:

  • Cause: Multiple services communicate over the network, and one service may be unable to reach another. For example, Amazon Simple Storage Service (S3) had an outage on Feb 28, 2017, and many tightly coupled services failed as well due to limited fault isolation. The post-mortem analysis recommended proper fault isolation, redundancy across regions, and a better understanding and management of complex inter-service dependencies.
  • Challenges: Leads to network-related issues, such as latency, bandwidth limitations, and network partitioning, causing both system errors and potentially triggering faults.

2. Data Consistency:

  • Cause: Maintaining data consistency across services that use different databases. This can occur where a microservice stores data in multiple data stores without proper anti-entropy validation or uses eventual consistency, e.g. a trading firm might be using CQRS pattern where transaction events are persisted in a write datastore, which is then replicated to a query datastore so user may not see up-to-date data when querying recently stored data.
  • Challenges: Ensuring transactional integrity and eventual consistency can be complex, leading to system errors if not managed properly.

3. Service Dependencies:

  • Cause: Tight coupling between services. For example, an online travel booking platform might deploy multiple microservices for managing hotel bookings, flight reservations, car rentals, etc. If these services are tightly coupled, a minor update to the flight reservation service may unintentionally break compatibility with the hotel booking service.
  • Challenges: Cascading failures and difficulty in isolating faults. A failure in one service can easily propagate to others if not properly isolated.

4. Scalability Issues:

  • Cause: Individual services may require different scaling strategies. For example, Netflix suffered a major outage on Oct 29, 2012, when, due to a scaling issue, the Amazon Elastic Load Balancer (ELB) used for routing couldn’t route requests effectively. The lessons learned from the incident included improved scaling strategies, redundancy and failover planning, and monitoring and alerting enhancements.
  • Challenges: Implementing effective scaling without affecting other services or overall system stability. Mismanagement can lead to system errors or even failures.

5. Security Concerns:

  • Cause: Protecting the integrity and confidentiality of data as it moves between services. For example, on July 19, 2019, CapitalOne had a major security breach for its data that was stored on AWS. A former AWS employee discovered a misconfigured firewall and exploited it, accessing sensitive customer data. The incident caused significant reputational damage and legal consequences to CapitalOne, which then worked on a broader review of security practices, emphasizing the need for proper configuration, monitoring, and adherence to best practices.
  • Challenges: Security breaches or misconfigurations could be seen as faults, leading to potential system errors or failures.

6. Monitoring and Logging:

  • Cause: The need for proper monitoring and logging across various independent services to gain insights when microservices are misbehaving. For example, if a service silently behaves erratically and causes intermittent failures for customers, the lack of proper monitoring and logging will lead to a more significant outage and a longer time to diagnose and resolve the issue.
  • Challenges: Difficulty in tracking and diagnosing both system and customer errors across different services.

7. Configuration Management:

  • Cause: Managing configuration across multiple services. For example, on July 20, 2021, WizCase discovered unsecured Amazon S3 buckets containing data from more than 80 US locales, predominantly in New England. The misconfigured S3 buckets included more than 1,000GB of data and more than 1.6 million files; residents’ actual addresses, telephone numbers, IDs, and tax papers were all exposed. On October 4, 2021, Facebook suffered a nearly six-hour outage due to misconfigured DNS and BGP settings. Oasis cites misconfiguration as a top root cause for security incidents and events.
  • Challenges: Mistakes in configuration management can be considered as faults, leading to errors and potentially failures in one or more services.

8. API Misuse (Customer Errors):

  • Cause: Clients using the API incorrectly, sending improper requests. For example, on October 21, 2016, Dyn experienced a massive Distributed Denial of Service (DDoS) attack, rendering a significant portion of the internet inaccessible for several hours. High-profile sites, including Twitter, Reddit, and Netflix, experienced outages. The DDoS attack was primarily driven by the Mirai botnet, which consisted of a large number of compromised Internet of Things (IoT) devices like security cameras, DVRs, and routers. These devices were vulnerable because of default or easily guessable passwords. The attackers took advantage of these compromised devices and used them to send massive amounts of traffic to Dyn’s servers, especially by abusing the devices’ APIs to make repeated and aggressive requests. The lessons learned included better IoT security, strengthening infrastructure and adding API guardrails such as built-in security and rate-limiting.
  • Challenges: Handling these errors gracefully to guide clients in correcting their requests.

9. Service Versioning:

  • Cause: Multiple versions of services running simultaneously. For example, conflicts between the old and new versions may lead to unexpected behavior in the system. Requests routed to the new version might be handled differently than those routed to the old version, causing inconsistencies.
  • Challenges: Compatibility issues between different versions can lead to system errors.

10. Diverse Technology Stack:

  • Cause: Different services might use different languages, frameworks, or technologies. For example, the diverse technology stack may cause problems with inconsistent monitoring and logging, different vulnerability profiles and security patching requirement, leading to increased complexity in managing, monitoring, and securing the entire system.
  • Challenges: Increases complexity in maintaining, scaling, and securing the system, which can lead to faults.

11. Human Factors:

  • Cause: Errors in development, testing, deployment, or operations. For example, Amazon Simple Storage Service (S3) had an outage on Feb 28, 2017, which was caused by a human error during the execution of an operational command: a typo in a command executed by an Amazon team member, intended to take a small number of servers offline, inadvertently removed more servers than intended, and many tightly coupled services failed as well due to limited fault isolation. The post-mortem analysis recommended implementing safeguards against both human errors and system failures.
  • Challenges: Human mistakes can introduce faults, lead to both customer and system errors, and even cause failures if not managed properly.

12. Lack of Adequate Testing:

  • Cause: Insufficient unit, integration, functional, and canary testing. For example, on August 1, 2012, Knight Capital deployed untested software to a production environment, resulting in a malfunction in their automated trading system. The flawed system started buying and selling millions of shares at incorrect prices. Within 45 minutes, the company incurred a loss of $440 million. The code that was deployed to production was not properly tested. It contained old, unused code that should have been removed, and the new code’s interaction with existing systems was not fully understood or verified. The lessons learned included ensuring that all code, especially that which controls critical functions, is thoroughly tested, implementing robust and consistent deployment procedures to ensure that changes are rolled out uniformly across all relevant systems, and having mechanisms in place to quickly detect and halt erroneous behavior, such as a “kill switch” for automated trading systems.
  • Challenges: Leads to undetected faults, resulting in both system and customer errors, and potentially, failures in production.

13. Inadequate Alarms and Health Checks:

  • Cause: Lack of proper monitoring and health check mechanisms. For example, on January 31, 2017, GitLab suffered a severe data loss incident. An engineer accidentally deleted a production database while attempting to address some performance issues. This action resulted in a loss of 300GB of user data. GitLab’s monitoring and alerting system did not properly notify the team of the underlying issues that were affecting database performance. The lack of clear alarms and health checks contributed to the confusion and missteps that led to the incident. The lessons learned included ensuring that health checks and alarms are configured to detect and alert on all critical conditions, and establishing and enforcing clear procedures and protocols for handling critical production systems, including guidelines for dealing with performance issues and other emergencies.
  • Challenges: Delays in identifying and responding to faults and errors, which can exacerbate failures.

14. Lack of Code Review and Quality Control:

  • Cause: Insufficient scrutiny during the development process. For example, on March 14, 2012, the Heartbleed bug was introduced with the release of OpenSSL version 1.0.1 but it was not discovered until April 2014. The bug allowed attackers to read sensitive data from the memory of millions of web servers, potentially exposing passwords, private keys, and other sensitive information. The bug was introduced through a single coding error. There was a lack of rigorous code review process in place to catch such a critical mistake. The lessons learned included implementing a thorough code review process, establishing robust testing and quality control measures to ensure that all code, especially changes to security-critical areas, is rigorously verified.
  • Challenges: Increases the likelihood of introducing faults and bugs into the system, leading to potential errors and failures.

15. Lack of Proper Test Environment:

  • Cause: Absence of a representative testing environment. For example, on August 1, 2012, Knight Capital deployed new software to a production server that contained obsolete and nonfunctional code. This code accidentally got activated, leading to unintended trades flooding the market. The algorithm was buying high and selling low, the exact opposite of a profitable strategy. The company did not have a proper testing environment that accurately reflected the production environment. Therefore, the erroneous code was not caught during the testing phase. The lessons learned included ensuring a robust and realistic testing environment that accurately mimics the production system, implementing strict and well-documented deployment procedures and implementing real-time monitoring and alerting to catch unusual or erroneous system behavior.
  • Challenges: Can lead to unexpected behavior in production due to discrepancies between test and production environments.

16. Elevated Permissions:

  • Cause: Overly permissive access controls. For example, on July 19, 2019, CapitalOne announced that an unauthorized individual had accessed the personal information of approximately 106 million customers and applicants. The breach occurred when a former employee of a third-party contractor exploited a misconfigured firewall, gaining access to data stored on Amazon’s cloud computing platform, AWS. The lessons learned included implementing the principle of least privilege, robust monitoring to detect and alert on suspicious activities quickly, and evaluating the security practices of third-party contractors and vendors.
  • Challenges: Increased risk of security breaches and unauthorized actions, potentially leading to system errors and failures.

17. Single Point of Failure:

  • Cause: Reliance on a single component without redundancy. For example, on January 31, 2017, GitLab experienced a severe data loss incident when an engineer, while attempting to remove a secondary database, accidentally deleted the primary production database. The primary production database was a single point of failure in the system, and its deletion instantly brought down the entire service. Approximately 300GB of data was permanently lost, including issues, merge requests, user accounts, comments, and more. The lessons learned included eliminating single points of failure, implementing safeguards against human error, and testing backups.
  • Challenges: A failure in one part can bring down the entire system, leading to cascading failures.

18. Large Blast Radius:

  • Cause: Lack of proper containment and isolation strategies. For example, on September 4, 2018, the Azure South Central U.S. datacenter experienced a significant outage affecting multiple regions. A severe weather event in the southern United States led to cooling failures in one of Azure’s data centers. Automated systems responded to the cooling failure by shifting loads to a data center in a neighboring region. This transfer was larger and faster than anticipated, leading to an overload in the secondary region. The lessons learned included deep understanding of dependencies and failure modes, limiting the blast radius, and continuous improvements in resilience.
  • Challenges: An error in one part can affect a disproportionate part of the system, magnifying the impact of failures.

19. Throttling and Limits Issues:

  • Cause: Inadequate management of request rates and quotas. For example, on February 28, 2017, AWS S3 experienced a significant disruption in the US-EAST-1 region, causing widespread effects on many dependent systems. A command to take a small number of servers offline for inspection was executed incorrectly, leading to a much larger removal of capacity than intended. Once the servers were inadvertently removed, the S3 subsystems had to be restarted. The restart process included safety checks, which required specific metadata. However, the capacity removal caused these metadata requests to be throttled. Many other systems were dependent on the throttled subsystem, and as the throttling persisted, it led to a cascading failure. The lessons learned included safeguards against human errors, dependency analysis, and testing throttling mechanisms.
  • Challenges: Can lead to service degradation or failure under heavy load.
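One common guardrail for this class of problem is a token-bucket throttle in front of each API; the sketch below is illustrative only, with hypothetical rate and capacity values:

import time

class TokenBucket:
    # Allows short bursts up to 'capacity' while sustaining 'rate' requests per second.
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should reject with HTTP 429 so clients back off

limiter = TokenBucket(rate=100, capacity=200)   # hypothetical limits per client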

20. Rushed Releases:

  • Cause: Releasing changes without proper testing or review. For example, on January 31, 2017, GitLab experienced a severe data loss incident. A series of events that started with a rushed release led to an engineer accidentally deleting a production database, resulting in the loss of 300GB of user data. The team was working on addressing performance issues and pushed a release without properly assessing the risks and potential side effects. The lessons learned included avoiding rushed decisions, clear separation of environments, proper access controls, and a robust backup strategy.
  • Challenges: Increases the likelihood of introducing faults and errors into the system.

21. Excessive Logging:

  • Cause: Logging more information than necessary. For example, excessive logs can result in disk space exhaustion, performance degradation, service disruption or high operating cost due to additional network bandwidth and storage costs.
  • Challenges: Can lead to performance degradation and difficulty in identifying relevant information.

22. Circuit Breaker Mismanagement:

  • Cause: Incorrect implementation or tuning of circuit breakers. For example, on November 18, 2014, Microsoft Azure suffered a substantial global outage affecting multiple services. An update to Azure’s Storage Service included a change to the configuration file governing the circuit breaker settings. The flawed update led to an overly aggressive tripping of circuit breakers, which, in turn, led to a loss of access to the blob front-ends. The lessons learned included incremental rollouts, thorough testing of configuration changes, and a clear understanding of component interdependencies.
  • Challenges: Potential system errors or failure to protect the system during abnormal conditions.
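For reference, a bare-bones circuit breaker looks roughly like the following sketch; the max_failures and reset_timeout values are hypothetical and need careful tuning, which is exactly what the Azure incident above underscores:

import time

class CircuitBreaker:
    # Opens after max_failures consecutive errors; allows one probe call after reset_timeout.
    def __init__(self, max_failures=5, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at = None            # half-open: let one probe request through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                    # a success closes the circuit again
        return result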

23. Retry Mechanism:

  • Cause: Mismanagement of retry logic. For example, on September 20, 2015, an outage in DynamoDB led to widespread disruption across various AWS services. The root cause was traced back to issues related to the retry mechanism. A small error in the system led to a slight increase in latency. Due to an aggressive retry mechanism, the slightly increased latency led to a thundering herd problem where many clients retried their requests almost simultaneously. The absence of jitter (randomization) in the retry delays exacerbated this surge of requests because retries from different clients were synchronized. The lessons learned included proper retry logic with jitter, understanding dependencies, and enhancements to monitoring and alerting.
  • Challenges: Can exacerbate network congestion and failure conditions, particularly without proper jitter implementation.
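A minimal sketch of exponential backoff with full jitter, in the spirit of the lessons from the DynamoDB incident above (parameter values are hypothetical):

import random
import time

def retry_with_jitter(operation, attempts=5, base=0.1, cap=5.0):
    # Sleep a random amount up to the capped exponential backoff so clients stay de-synchronized.
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            backoff = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, backoff))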

24. Backward Incompatible Changes:

  • Cause: Introducing changes that are not backward compatible. For example, on August 1, 2012, Knight Capital deployed new software to a production environment. This software was intended to replace old, unused code but instead activated old, defective functionality. The new software was not compatible with the existing system, and instead of being deactivated, the old code paths were unintentionally activated. The incorrect software operation caused Knight Capital a loss of over $460 million in just 45 minutes. The lessons learned included proper testing, processes for deprecating old code, and robust monitoring and rapid response mechanisms.
  • Challenges: Can break existing clients and other services, leading to system errors.

25. Inadequate Capacity Planning:

  • Cause: Failure to plan for growth or spikes in usage. For example, on October 21, 2018, GitHub experienced a major outage that lasted for over 24 hours. During this period, various services within GitHub were unavailable or severely degraded. The incident was caused by inadequate capacity planning as GitHub’s database was operating close to its capacity. A routine maintenance task to replace a failing 100G network link set off a series of events that caused the database to failover to a secondary. This secondary didn’t have enough capacity to handle the production load, leading to cascading failures. The lessons learned included capacity planning, regular review of automated systems and building redundancy in critical components.
  • Challenges: Can lead to system degradation or failure under increased load.

26. Lack of Failover Isolation:

  • Cause: Insufficient isolation between primary and failover mechanisms. For example, on September 4, 2018, the Azure South Central U.S. datacenter experienced a significant outage. The incident was caused by a lightning strike, which resulted in a voltage swell that impacted the cooling systems, causing them to shut down. Many services that were only hosted in this particular region went down completely, showing a lack of failover isolation between regions. The lessons learned included redundancy in critical systems, cross-region failover strategies, and regular testing of failover procedures.
  • Challenges: Can lead to cascading failures if both primary and failover systems are affected simultaneously.

27. Noise in Metrics and Alarms:

  • Cause: Too many irrelevant or overly sensitive alarms and metrics. Over time, the number of metrics and alarms may grow to a point where there are thousands of alerts firing every day, many of them false positives or insignificant, and the noise level in the alerting system becomes overwhelming. For example, if many alarms are set with thresholds too close to regular operating parameters, they may cause frequent false positives, and the operations team becomes desensitized to alerts, treating them as “normal.” The lessons learned include focusing on the most meaningful metrics and alerts, and regularly reviewing and adjusting alarm thresholds and relevance to ensure they remain meaningful.
  • Challenges: Can lead to alert fatigue and hinder the prompt detection and response to real issues, increasing the risk of system errors and failures going unaddressed.

28. Variations Across Environments:

  • Cause: Differences between development, staging, and production environments. For example, a development team might be using development, testing, staging, and production environments, allowing them to safely develop, test, and deploy their services. However, the production environment might use different versions of the database or middleware, have a different network topology, or hold different data, causing unexpected behaviors that lead to a significant outage.
  • Challenges: May lead to unexpected behavior and system errors, as code behaves differently in production compared to the test environment.

29. Inadequate Training or Documentation:

  • Cause: Lack of proper training, guidelines, or documentation for developers and operations teams. For example, if the internal team is not properly trained on the complexities of the microservices architecture, it can lead to misunderstandings of how services interact. Without proper training or documentation, the team may take a significant amount of time to identify the root causes of the issues.
  • Challenges: Can lead to human-induced faults, misconfiguration, and inadequate response to incidents, resulting in errors and failures.

30. Self-Inflicted Traffic Surge:

  • Cause: Uncontrolled or unexpected increase in internal traffic, such as excessive inter-service calls. For example, on January 31, 2017, GitLab experienced an incident that, while primarily related to data deletion, also demonstrated a form of self-inflicted traffic surge. While attempting to restore from a backup, a misconfiguration in the application caused a rapid increase in requests to the database. The lessons learned included testing configurations in an environment that mimics production, robust alerting and monitoring, and a clear understanding of interactions between components.
  • Challenges: Can overload services, causing system errors, degradation, or even failure.

31. Lack of Phased Deployment:

  • Cause: Releasing changes to all instances simultaneously without gradual rollout. For example, on August 1, 2012, Knight Capital deployed new software to a production environment. The software was untested in this particular environment, and an old, incompatible module was accidentally activated. The software was deployed to all servers simultaneously instead of gradually rolling it out to observe potential issues. The incorrect software operation caused Knight Capital to accumulate a massive unintended position in the market, resulting in a loss of over $440 million and a significant impact to its reputation. The lessons learned included phased deployment, thorough testing and understanding dependencies.
  • Challenges: Increases the risk of widespread system errors or failures if a newly introduced fault is triggered.

32. Broken Rollback Mechanisms:

  • Cause: Inability to revert to a previous stable state due to faulty rollback procedures. For example, a microservice tries to deploy a new version but after the deployment, issues are detected, and the decision is made to rollback. However, the rollback process fails, exacerbating the problem and leading to an extended outage.
  • Challenges: Can exacerbate system errors or failures during an incident, as recovery options are limited.

33. Inappropriate Timing:

  • Cause: Deploying new changes during critical periods such as Black Friday. For example, on Black Friday in 2014, Best Buy’s website experienced multiple outages throughout the day, caused by maintenance or deployment actions that coincided with the traffic surge. Best Buy took the site down intermittently to address the issues, which, while necessary, only extended the outage durations for customers. The lessons learned included avoiding deployments on critical days, better capacity planning, and employing rollback strategies.
  • Challenges: Deploying significant changes or conducting maintenance during high-traffic or critical periods can lead to catastrophic failures.

The myriad potential challenges in microservice architecture reflect the complexity and diversity of factors that must be considered in design, development, deployment, and operation. By recognizing and addressing these causes proactively through robust practices, thorough testing, careful planning, and vigilant monitoring, teams can greatly enhance the resilience, reliability, and robustness of their microservice-based systems.

Incident Metrics

In order to prevent common causes of service faults and errors, a microservice environment can track the following metrics:

1. MTBF (Mean Time Between Failures):

  • Prevent: By analyzing MTBF, you can identify patterns in system failures and proactively address underlying issues to enhance stability.
  • Detect: Monitoring changes in MTBF may help in early detection of emerging problems or degradation in system health.
  • Resolve: Understanding MTBF can guide investments in redundancy and failover mechanisms to ensure continuous service even when individual components fail.

2. MTTR (Mean Time to Repair):

  • Prevent: Reducing MTTR often involves improving procedures and tools for diagnosing and fixing issues, which also aids in preventing failures by addressing underlying faults more efficiently.
  • Detect: A sudden increase in MTTR can signal that something has changed within the system, such as a new fault that’s harder to diagnose, triggering a deeper investigation.
  • Resolve: Lowering MTTR directly improves recovery by minimizing the time it takes to restore service after a failure. This can be done through automation, streamlined procedures, and robust rollback strategies.

3. MTTA (Mean Time to Acknowledge):

  • Prevent: While MTTA mainly focuses on response times, reducing it can foster a more responsive monitoring environment, helping to catch issues before they escalate.
  • Detect: A robust monitoring system that allows for quick acknowledgment can speed up the detection of failures or potential failures.
  • Resolve: Faster acknowledgment of issues means quicker initiation of resolution processes, which can help in restoring the service promptly.

4. MTTF (Mean Time to Failure):

  • Prevent: MTTF provides insights into the expected lifetime of a system or component. Regular maintenance, monitoring, and replacement aligned with MTTF predictions can prevent unexpected failures.
  • Detect: Changes in MTTF patterns can provide early warnings of potential failure, allowing for pre-emptive action.
  • Resolve: While MTTF doesn’t directly correlate with resolution, understanding it helps in planning failover strategies and ensuring that backups or redundancies are in place for anticipated failures.
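As a rough illustration of how these metrics can be derived from an incident log (the timestamps below are hypothetical):

from datetime import datetime, timedelta

# Each record: (failure detected, acknowledged, service restored)
incidents = [
    (datetime(2023, 1, 3, 10, 0), datetime(2023, 1, 3, 10, 5), datetime(2023, 1, 3, 11, 0)),
    (datetime(2023, 2, 7, 14, 0), datetime(2023, 2, 7, 14, 2), datetime(2023, 2, 7, 14, 45)),
]
observation = timedelta(days=60)

n = len(incidents)
mttr = sum(((restored - failed) for failed, _, restored in incidents), timedelta()) / n
mtta = sum(((acked - failed) for failed, acked, _ in incidents), timedelta()) / n
downtime = sum(((restored - failed) for failed, _, restored in incidents), timedelta())
mtbf = (observation - downtime) / n   # mean up-time between failures

print(f"MTTR={mttr}, MTTA={mtta}, MTBF={mtbf}")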

Implementing These Metrics:

Utilizing these metrics in a microservices environment requires:

  • Comprehensive Monitoring: Continual monitoring of each microservice to gather data.
  • Alerting and Automation: Implementing automated alerts and actions based on these metrics to ensure immediate response.
  • Regular Review and Analysis: Periodic analysis to derive insights and make necessary adjustments to both the system and the process.
  • Integration with Incident Management: Linking these metrics with incident management to streamline detection and resolution.

By monitoring these metrics and integrating them into the daily operations, incident management, and continuous improvement processes, organizations can build more robust microservice architectures capable of preventing, detecting, and resolving failures efficiently.

Development Procedures

A well-defined process is essential for managing the complexities of microservices architecture, especially when it comes to preventing, detecting, and resolving failures. This process typically covers various stages, from setting up monitoring and alerts to handling incidents, troubleshooting, escalation, recovery, communication, and continuous improvement. Here’s how such a process can be designed, including specific steps to follow when an alarm is received about the health of a service:

1. Preventing Failures:

  • Standardizing Development Practices: Creating coding standards, using automated testing, enforcing security guidelines, etc.
  • Implementing Monitoring and Alerting: Setting up monitoring for key performance indicators and establishing alert thresholds.
  • Regular Maintenance and Health Checks: Scheduling periodic maintenance, updates, and health checks to ensure smooth operation.
  • Operational Checklists: Maintaining a checklist for operational readiness such as:
    • Review requirements, API specifications, test plans and rollback plans.
    • Review logging, monitoring, alarms, throttling, feature flags, and other key configurations.
    • Document and understand components of a microservice and its dependencies.
    • Define key operational and business metrics for the microservice and set up a dashboard to monitor health metrics.
    • Review authentication, authorization and security impact for the service.
    • Review data privacy, archival and retention policies.
    • Define failure scenarios and impact to other services and customers.
    • Document capacity planning for scalability, redundancy to eliminate single point of failures and failover strategies.

2. Detecting Failures:

  • Real-time Monitoring: Constantly watching system metrics to detect anomalies.
  • Automated Alerting: Implementing automated alerts that notify relevant teams when an anomaly or failure is detected.

3. Responding to Alarms and Troubleshooting:

When an alarm is received:

  • Acknowledge the Alert: Confirm the reception of the alert and log the incident.
  • Initial Diagnosis: Quickly assess the scope, impact, and potential cause of the issue.
  • Troubleshooting: Follow a systematic approach to narrow down the root cause, using tools, logs, and predefined troubleshooting guides.
  • Escalation (if needed): If the issue cannot be resolved promptly, escalate to higher-level teams or experts, providing all necessary information.

4. Recovery and Mitigation:

  • Implement Immediate Mitigation: Apply temporary fixes to minimize customer impact.
  • Recovery Actions: Execute recovery plans, which might include restarting services, reallocating resources, etc.
  • Rollback (if needed): If a recent change caused the failure, initiate a rollback to a stable version, following predefined rollback procedures.

5. Communication:

  • Internal Communication: Keep all relevant internal stakeholders informed about the status, actions taken, and expected resolution time.
  • Communication with Customers: If the incident affects customers, communicate transparently about the issue, expected resolution time, and any necessary actions they need to take.

6. Post-Incident Activities:

  • Post-mortem Analysis: Conduct a detailed analysis of the incident, identify lessons learned, and update procedures as needed.
  • Continuous Improvement: Regularly review and update the process, including the alarm response and troubleshooting guides, based on new insights and changes in the system.

A well-defined process for microservices not only provides clear guidelines on development and preventive measures but also includes detailed steps for responding to alarms, troubleshooting, escalation, recovery, and communication. Such a process ensures that the team is prepared and aligned when issues arise, enabling rapid response, minimizing customer impact, and fostering continuous learning and improvement.

Post-Mortem Analysis

When a failure or an incident occurs in a microservice, the development team will need to follow a post-mortem process for analyzing and evaluating an incident or failure. Here’s how post-mortems help enhance fault tolerance:

1. Understanding Root Causes:

A post-mortem helps identify the root cause of a failure, not just the superficial symptoms. By using techniques like the “5 Whys,” teams can delve deep into the underlying issues that led to the fault, such as coding errors, network latency, or configuration mishaps.

2. Assessing Impact and Contributing Factors:

Post-mortems enable the evaluation of the full scope of the incident, including customer impact, affected components, and contributing factors like environmental variations. This comprehensive view allows for targeted improvements.

3. Learning from Failures:

By documenting what went wrong and what went right during an incident, post-mortems facilitate organizational learning. This includes understanding the sequence of events, team response effectiveness, tools and processes used, and overall system resilience.

4. Developing Actionable Insights:

Post-mortems result in specific, actionable recommendations to enhance system reliability and fault tolerance. This could involve code refactoring, infrastructure upgrades, or adjustments to monitoring and alerting.

5. Improving Monitoring and Alerting:

Insights from post-mortems can be used to fine-tune monitoring and alerting systems, making them more responsive to specific failure patterns. This enhances early detection and allows quicker response to potential faults.

6. Fostering a Culture of Continuous Improvement:

Post-mortems encourage a blame-free culture focused on continuous improvement. By treating failures as opportunities for growth, teams become more collaborative and proactive in enhancing system resilience.

7. Enhancing Documentation and Knowledge Sharing:

The documentation produced through post-mortems is a valuable resource for the entire organization. It can be referred to when similar incidents occur, or during the onboarding of new team members, fostering a shared understanding of system behavior and best practices.

Conclusion

The complexity and interdependent nature of microservice architecture introduce specific challenges in terms of management, communication, security, and fault handling. By adopting robust measures for prevention, detection, and recovery, along with adhering to development best practices and learning from post-mortems, organizations can significantly enhance the fault tolerance and resilience of their microservices. A well-defined, comprehensive approach that integrates all these aspects ensures a more robust, flexible, and responsive system, capable of adapting and growing with evolving demands.

June 24, 2023

Systems Performance

Filed under: Computing,Technology — admin @ 3:33 pm

This blog summarizes learnings from Systems Performance by Brendan Gregg that covers concepts, tools, and performance tuning for operating systems and applications.

1. Introduction

This chapter introduces Systems Performance including all major software and hardware components with goals to improve the end-user experience by reducing latency and computing cost. The author describes challenges for performance engineering such as subjectivity without clear goals, complexity that requires a holistic approach, multiple causes, and multiple performance issues. The author defines key concepts for performance such as:

  • Latency: measurement of time spent waiting and it allows maximum speedup to be estimated.
  • Observability: refers to understanding a system through observation and includes tools that use counters, profiling, and tracing. It relies on counters, statistics, and metrics, which can be used to trigger alerts by the monitoring software. Profiling performs sampling to paint a picture of the target. Tracing is event-based recording, where event data is captured and saved for later analysis using static instrumentation or dynamic instrumentation. The latest dynamic tools use BPF (Berkeley Packet Filter) to build dynamic tracing, which is also referred to as eBPF.
  • Experimentation tools: benchmark tools that test a specific component, e.g., the following example measures TCP network throughput:
iperf -c 192.168.1.101 -i 1 -t 10

The chapter also describes common Linux tools for analysis such as:

  • dmesg -T | tail
  • vmstat -SM 1
  • mpstat -P ALL 1
  • pidstat 1
  • iostat -sxz 1
  • free -m
  • sar -n DEV 1
  • sar -n TCP,ETCP 1

2. Methodologies

This chapter introduces common performance concepts such as IOPS, throughput, response-time, latency, utilization, saturation, bottleneck, workload, and cache. It defines models of system performance such as System Under Test and Queuing System, shown below:

(Figures: System Under Test and Queuing System models)

The chapter continues with defining concepts such as latency, time-scales, trade-offs, tuning efforts, load vs architecture, scalability, and metrics. It defines utilization based on time as:

U = B / T
where U = utilization, B = total-busy-time, T = observation period

and in terms of capacity, e.g.

U = % used capacity
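For example, with hypothetical measurements, a resource that was busy for 54 of the 60 seconds observed is 90% utilized:

busy_time = 54.0            # seconds the resource was busy (hypothetical)
observation_period = 60.0   # seconds observed
utilization = busy_time / observation_period
print(f"U = {utilization:.0%}")   # U = 90%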

The chapter defines saturation as the degree to which a resource has queued work it cannot service and defines caching considerations to improve performance such as hit ratio, cold-cache, warm-cache, and hot-cache.

The author then describes analysis perspectives and defines resource analysis that begins with analysis of the system resources for investigating performance issues and capacity planning, and workload analysis that examines the performance of application for identifying and confirming latency issues. Next, author describes following methodologies for analysis:

  • Streetlight Anti-Method: It is an absence of a deliberate methodology, where the user analyzes a familiar tool, which can be hit or miss.
  • Random Change Anti-Method: In this approach, user randomly guesses where the problems may be and then changes things until it goes away.
  • Blame-Someone-Else Anti-Method: In this approach, user blames someone else and redirects the issue to another team.
  • Ad Hoc Checklist Method: It’s a common methodology where a user uses an ad hoc list built from recent experience.
  • Problem Statement: It defines the problem statement based on if there has been a performance issue; what was changed recently; who are being affected; etc.
  • Scientific Method: This approach is summarized as: Question -> Hypothesis -> Prediction -> Test -> Analysis.
  • Diagnostic Cycle: This is defined as hypothesis -> instrumentation -> data -> hypothesis.
  • Tools Method: This method lists available performance tools; gather metrics from each tool; and then interpret the metrics collected.
  • Utilization, Saturation, and Errors (USE) Method: This method focuses on system resources and checks utilization, saturation and errors for each resource.
  • RED Method: This approach checks request rate, errors, and duration for every service.
  • Workload Characteristics: This approach answers questions about Who is causing the load; why is the load being called; what are the load characteristics?
  • Drill-Down Analysis: This approach defines three-stage drill-down analysis methodology for system resource: Monitoring, Identification, Analysis.
  • Latency Analysis: This approach examines the time taken to complete an operation and then breaks it into smaller components, continuing to subdivide the components.
  • Method R: This methodology was developed for Oracle databases and focuses on finding the origin of latency.

Modeling

The chapter then defines analytical modeling of a system using following techniques:

  • Enterprise vs Cloud
  • Visual Identification uses graphs to identify patterns for linear scalability, contention, coherence (propagation of changes), knee-point (where performance stops scaling linearly), scalability ceiling.
  • Amdahl’s Law of Scalability describes contention for the serial resource:
C(N) = N / (1 + a(N - 1))
where C(N) is relative capacity, N is scaling dimension such as number of CPU, and a is degree of seriality.
  • Universal Scalability Law is described as:
C(N) = N / (1 + a(N - 1) + bN(N - 1))
where b is the coherence parameter and when b = 0, it becomes Amdahl's Law
  • Queuing Theory describes Little’s Law as:
L = lambda * W
where L is average number of requests in the system, lambda is average arrival rate, and W is average request time.
(Figure: Queuing System model)
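The following short sketch plugs hypothetical parameters (5% serial work, 1% coherence cost) into these formulas to compare Amdahl's Law with the Universal Scalability Law, and applies Little's Law:

def amdahl(n, a):
    return n / (1 + a * (n - 1))

def usl(n, a, b):
    return n / (1 + a * (n - 1) + b * n * (n - 1))

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(amdahl(n, 0.05), 1), round(usl(n, 0.05, 0.01), 1))

# Little's Law: L = lambda * W, e.g. 100 req/s arriving with 50 ms average request time
print(100 * 0.05)   # ~5 requests in the system on average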

Capacity Planning

This section describes Capacity Planning for examining how system will handle load and will scale as load scales. It searches for a resource that will become the bottleneck under load including hardware and software components. It then applies factor analysis to determine what factors to change to achieve the desired performance.

Statistics

This section reviews how to use statistics for analysis such as:

  • Quantifying Performance Gains: using observation-based and experimentation-based techniques to compare performance improvements.
  • Averages: including geometric-mean (nth root of multiplied values), harmonic-mean (count of values divided by sum of their reciprocals), average over time, decayed average (recent time weighed more).
  • Standard Deviation, Percentile, Median
  • Coefficient of Variations
  • Multimodal Distribution

Monitoring

Monitoring records performance statistics over time for comparison and identification using various time-based patterns such as hourly, daily, weekly, and yearly.

Visualization

This section examines various visualization graphs such as line-chart, scatter-plots, heat-maps, timeline-charts, and surface plot.

3. Operating Systems

This chapter examines operating system and kernel for system performance analysis and defines concepts such as:

Background

Kernel

Kernel is the core of the operating system, and though Linux and BSD have a monolithic kernel, other kernel models include microkernels, unikernels, and hybrid kernels. In addition, new Linux versions include extended BPF for enabling secure kernel-mode applications.

Kernel and User Modes

The kernel runs in a kernel mode to access devices and execution of the privileged instructions. User applications run in a user mode where they can request privileged operations using system calls such as ioctl, mmap, brk, and futex.

Interrupts

An interrupt is a signal to the processor that some event has occurred and needs processing; it interrupts the current execution of the processor and runs an interrupt service routine to handle the event. Interrupts can be asynchronous, for handling interrupt service requests (IRQs) from hardware devices, or synchronous, generated by software instructions such as traps, exceptions, and faults.

Clocks and Idle

In old kernel implementations, tick latency and tick overhead caused some performance issues but modern implementations have moved much functionality out of the clock routine to on-demand interrupts to create tickless kernel for improving power efficiency.

Processes

A process is an environment for executing a user-level program and consists of a memory address space, file descriptors, thread stacks, and registers. A process contains one or more threads, where each thread has a stack, registers, and an instruction pointer (PC). Processes are normally created using the fork system call (which wraps clone on Linux), typically followed by the exec/execve system call to run a new program.

Stack

A stack is a memory storage area for temporary data, organized as a last-in, first-out (LIFO) list. It is used to store the return address when calling a function and to pass parameters to the function, which together form a stack frame. The call path can be seen by examining the saved return addresses across all the stack frames, which is called a stack trace or stack back trace. While executing a system call, a process thread has two stacks: a user-level stack and a kernel-level stack.

Virtual Memory

Virtual memory is an abstraction of main memory, providing processes and the kernel with their own private view of main memory. It supports multitasking of threads and over-subscription of main memory. The kernel manages memory using process-swapping and paging schemes.

Schedulers

The scheduler schedules processes on processors by dividing CPU time among the active processes and threads. The scheduler tracks all threads in the ready-to-run state in run priority-queues, where process priority can be modified to improve performance of the workload. Workloads are categorized as either CPU-bound or I/O-bound, and the scheduler may decrease the priority of CPU-bound processes to allow I/O-bound workloads to run sooner.

File Systems

File systems are an organization of data as files and directories. The virtual file system (VFS) abstracts file system types so that multiple file systems may coexist.

Kernels

This section discusses Unix-like kernel implementations with a focus on performance such as Unix, BSD, and Solaris. In the context of Linux, it describes systemd, which is a commonly used service manager to replace the original UNIX init system, and extended BPF that can be used for networking, observability, and security. BPF programs run in kernel mode and are configured to run on events like USDT probes, kprobes, uprobes, and perf_events.

4. Observability Tools

This chapter identifies static performance and crisis tools, including their overhead, like counters, profiling, and tracing.

Tools Coverage

This section describes static performance tools like sysctl, dmesg, lsblk, mdadm, ldd, tc, etc., and crisis tools like vmstat, ps, dmesg, lscpu, iostat, mpstat, pidstat, sar, and more.

Tools Type

Observability tools can be categorized as system-wide or per-process, and as counter- or event-based, e.g., top shows a system-wide summary; ps and pmap are maintained per-process; and profilers/tracers are event-based tools. The kernel maintains various counters that are incremented when events occur, such as network packets received, disk I/O, etc. Profiling collects a set of samples such as CPU usage at a fixed rate or based on untimed hardware events, e.g., perf and profile are system-wide profilers, and gprof and cachegrind are per-process profilers. Tracing instruments the occurrence of an event and can store event-based details for later analysis. Examples of system-wide tracing tools include tcpdump, biosnoop, execsnoop, perf, ftrace and bpftrace, and examples of per-process tracing tools include strace and gdb. Monitoring records statistics continuously for later analysis, e.g., sar, prometheus, and collectd are common tools for monitoring.

Observability Sources

The main sources of system performance statistics include /proc and /sys. The /proc is a file system for kernel statistics and is created dynamically by the kernel. For example, ls -F /proc/123 lists per-process statistics for the process with PID 123 like limits, maps, sched, smaps, stat, status, cgroup and task. Similarly, ls -Fd /proc/[a-z]* lists system-wide statistics like cpuinfo, diskstats, loadavg, meminfo, schedstats, and zoneinfo. The /sys was originally designed for device-driver statistics but has been extended for other types of statistics, e.g., find /sys/devices/system/cpu/cpu0 -type f provides information about CPU caches. Tracepoints are a Linux kernel event source based on static instrumentation and provide insight into kernel behavior. For example, perf list tracepoint lists available tracepoints and perf trace -e block:block_rq_issue traces events. Tracing tools can also use tracepoints, e.g., strace -e openat ~/iosnoop and strace -e perf_event_open ~/biosnoop. A kernel event source based on dynamic instrumentation includes kprobes that can trace entry to functions and instructions, e.g., bpftrace -e 'kprobe:do_nanosleep { printf("sleep %s\n", comm); }'. A user-space event source for dynamic instrumentation includes uprobes, e.g., bpftrace -l 'uprobe:/bin/bash:*'. User-level statically-defined tracing (USDT) is the user-space version of tracepoints, and some libraries/apps have added USDT probes, e.g., bpftrace -lv 'usdt:/openjdk/libjvm.so:*'. Hardware counters (PMCs) are used for observing activity by devices, e.g., perf stat gzip words instruments the architectural PMCs.

Sar

Sar is a key monitoring tool that is provided via the sysstat package, e.g., sar -u -n TCP,ETCP reports CPU and TCP statistics.

5. Applications

This chapter describes performance tuning objectives, application basics, fundamentals for application performance, and strategies for application performance analysis.

Application Basics

This section defines performance goals including lowering latency, increasing throughput, improving resource utilization, and lowering computing costs. Some companies use a target application performance index (Apdex) as an objective and as a metric to monitor:

Apdex = (satisfactory + 0.5 x tolerable + 0 x frustrating) / total-events
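For example, with hypothetical counts of 900 satisfactory, 80 tolerable, and 20 frustrating events:

satisfactory, tolerable, frustrating = 900, 80, 20
total_events = satisfactory + tolerable + frustrating
apdex = (satisfactory + 0.5 * tolerable + 0 * frustrating) / total_events
print(round(apdex, 2))   # 0.94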

Application Performance Techniques

This section describes common techniques for improving application performance such as increasing I/O size to improve throughput, caching results of commonly performed operations, using a ring buffer for continuous transfer between components, and using event notifications instead of polling. This section describes concurrency for loading multiple runnable programs whose execution may overlap, and recommends parallelism via multiple processes or threads to take advantage of multiprocessor systems. These multiprocess or multithreaded applications use the CPU scheduler with the cost of context-switch overhead. Alternatively, user-mode applications may implement their own scheduling mechanisms like fibers (lightweight threads), co-routines (more lightweight than fibers, e.g. Goroutines in Golang), and event-based concurrency (as in Node.js). The common models of user-mode multithreaded programming are: using a service thread-pool, a CPU thread-pool, and staged event-driven architecture (SEDA). In order to protect the integrity of shared memory space when accessed from multiple threads, applications can use mutexes, spin locks, RW locks, and semaphores. The implementation of these synchronization primitives may use fastpath (using cmpxchg to set owner), midpath (optimistic spinning), slowpath (blocks and deschedules thread), or read-copy-update (RCU) mechanisms based on the concurrency use-cases. In order to avoid the cost of creation and destruction of mutex locks, the implementations may use a hashtable to store a set of mutex locks instead of using a global mutex lock for all data structures or a mutex lock for every data structure. Further, non-blocking I/O allows issuing I/O operations asynchronously without blocking the thread, using the O_ASYNC flag in open, io_submit, sendfile and io_uring_enter.
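As a small illustration of the service thread-pool model described above (the URLs are hypothetical placeholders), a pool of worker threads overlaps I/O-bound requests:

from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = ["https://example.com", "https://example.org"]   # hypothetical endpoints

def fetch(url):
    # I/O-bound work: threads overlap while waiting on the network.
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return url, resp.status
    except OSError as exc:
        return url, exc

with ThreadPoolExecutor(max_workers=4) as pool:   # the service thread-pool model
    for url, result in pool.map(fetch, URLS):
        print(url, result)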

Programming Languages

This section describes compiled languages, compiler optimization flags, interpreted languages, virtual machines, and garbage collection.

Methodology

This section describes methodologies for application analysis and tuning using:

  • CPU profiling and visualizing via CPU flame graphs.
  • off-CPU analysis using sampling, scheduler tracing, and application instrumentation; off-CPU flame graphs may be difficult to interpret due to wait time, so zooming or kernel filtering may be required.
  • Syscall analysis to study resource-based performance issues; targets include new-process tracing, I/O profiling, and kernel time analysis.
  • USE method
  • Thread state analysis across states such as user, kernel, runnable, swapping, disk I/O, net I/O, sleeping, lock, and idle.
  • Lock analysis
  • Static performance tuning
  • Distributed tracing

Observability Tools

This section introduces application performance observability tools:

  • perf is the standard Linux profiler with many uses such as:
    • CPU Profiling: perf record -F 49 -a -g -- sleep 30 && perf script --header > out.stacks
    • CPU Flame Graphs: ./stackcollapse-perf.pl < out.stacks | ./flamegraph.pl --hash > out.svg (see the sketch after this list)
    • Syscall Tracing: perf trace -p $(pgrep mysqld)
    • Kernel Time Analysis: perf trace -s -p $(pgrep mysqld)
  • profile is a timer-based CPU profiler from BCC, e.g.,
    • profile -F 49 10
  • offcputime and bpftrace to summarize time spent by threads blocked and off-CPU, e.g.,
    • offcputime 5
  • strace is the Linux system call tracer, e.g.,
    • strace -ttt -T -p 123
  • execsnoop traces new process execution system-wide.
  • syscount counts system calls system-wide.
  • bpftrace is a BPF-based tracer with a high-level programming language, e.g.,
    • bpftrace -e 't:syscalls:sys_enter_kill { time("%H:%M:%S "); }'
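
A sketch of the full CPU flame graph workflow referenced above, assuming the FlameGraph repository (https://github.com/brendangregg/FlameGraph) has been cloned into the working directory:

    # 1. Sample on-CPU stacks across all CPUs at 49 Hz for 30 seconds
    sudo perf record -F 49 -a -g -- sleep 30
    sudo perf script --header > out.stacks

    # 2. Fold the stacks and render an interactive SVG flame graph
    ./FlameGraph/stackcollapse-perf.pl < out.stacks | \
        ./FlameGraph/flamegraph.pl --hash > out.svg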

6. CPU

This chapter provides the basis for CPU analysis, including:

Models

This section describes CPU architecture and memory caches like CPU registers, L1, L2 and L3. It then describes the run queue, which queues software threads that are ready to run; the time spent waiting on a CPU run queue is called run-queue latency or dispatcher-queue latency.

Concepts

This section describes concepts regarding CPU performance including:

  • Clock Rate: Each CPU instruction may take one or more cycles of the clock to execute.
  • Instructions: CPU execute instructions chosen from their instruction set.
  • Instruction Pipeline: This allows executing multiple instructions in parallel by executing different components of different instructions at the same time. Modern processors may implement branch prediction to perform out-of-order execution of the pipeline.
  • Instruction Width: Superscalar CPU architectures allow more instructions to make progress with each clock cycle, based on the instruction width.
  • SMT: Simultaneous multithreading makes use of a superscalar architecture and hardware multithreading support to improve parallelism.
  • IPC, CPI: Instructions per cycle (IPC) and cycles per instruction (CPI) describe how a CPU is spending its clock cycles.
  • Utilization: CPU utilization is measured by the time a CPU instance is busy performing work during an interval.
  • User Time/Kernel Time: The CPU time spent executing user-level software is called user time, and the time spent executing kernel-level software is kernel time.
  • Saturation: A CPU at 100% utilization is saturated, and threads will encounter scheduler latency as they wait to run on CPU.
  • Priority Inversion: It occurs when a lower-priority thread holds a resource and blocks a higher-priority thread from running.
  • Multiprocess, Multithreading: Multithreading is generally considered superior.

Architecture

This section describes CPU architecture and implementation:

Hardware

CPU hardware includes the processor and its subsystems:

  • Processor: The processor components include P-cache (prefetch-cache), W-cache (write-cache), Clock, Timestamp counter, Microcode ROM, Temperature sensor, and network interfaces.
  • P-States and C-States: The advanced configuration and power interface (ACPI) defines P-states, which provide different levels of performance during execution, and C-states, which provide different idle states for when execution is halted, saving power.
  • CPU caches: This include levels of caches like level-1 instruction cache, level-1 data cache, translation lookaside buffer (TLB), level-2 cache, and level-3 cache. Multiple levels of cache are used to deliver the optimum configuration of size and latency.
  • Associativity: It describes a constraint for locating new entries in a cache: fully associative caches can place entries anywhere (e.g., with an LRU eviction policy), direct-mapped caches give each entry only one valid location, and set-associative caches map an address to a subset of the cache, e.g., a four-way set-associative cache maps an address to four possible locations.
  • Cache Line: Cache line size is a range of bytes that are stored and transferred as a unit.
  • Cache Coherence: Cache coherence ensures that CPUs are always accessing the correct state of memory.
  • MMU: The memory management unit (MMU) is responsible for virtual-to-physical address translation.
  • Hardware Counters (PMC): PMCs are processor registers implemented in hardware and include CPU cycles, CPU instructions, Level 1, 2, 3 cache accesses, floating-point unit, memory I/O, and resource I/O.
  • GPU: GPUs support graphical displays.

Software

Kernel software includes the scheduler, which performs time-sharing, preemption, and load balancing. The scheduler uses scheduling classes to manage the behavior of runnable threads, such as priorities and scheduling policies. The scheduling classes for Linux kernels include RT (fixed and high priorities for real-time workloads), O(1) (an earlier scheduler designed for reduced latency), CFS (completely fair scheduling), Idle, and Deadline. Scheduler policies include RR (round-robin), FIFO, NORMAL, BATCH, IDLE, and DEADLINE.

Methodology

This section describes various methodologies for CPU analysis and tuning such as:

  • Tools Method: iterate over available tools like uptime, vmstat, mpstat, perf/profile, showboost/turbostat, and dmesg.
  • USE Method: It checks for utilization, saturation, and errors for each CPU.
  • Workload Characterization: CPU load average, user-time to system-time ratio, syscall rate, voluntary context switch rate, and interrupt rate.
  • Profiling: CPU profiling can be performed by timer-based sampling or function tracing.
  • Cycle Analysis: Using performance monitoring counters (PMCs) to understand CPU utilization at the cycle level.
  • Performance Monitoring: identifies active issues and patterns over time using metrics for CPU like utilization and saturation.
  • Static Performance Tuning
  • Priority Tuning
  • CPU Binding

Observability Tools

This section introduces CPU performance observability tools such as:

  • uptime
  • load average – exponentially damped moving average for load including current resource usage plus queued requests (saturation).
  • pressure stall information (PSI)
  • vmstat, e.g. vmstat 1
  • mpstat, e.g. mpstat -P ALL 1
  • sar
  • ps
  • top
  • pidstat
  • time, ptime
  • turbostat
  • showboost
  • pmcarch
  • tlbstat
  • perf, e.g., perf record -F 99 command, perf stat gzip ubuntu.iso, and perf stat -a -- sleep 10
  • profile
  • cpudist, e.g., cpudist 10 1
  • runqlat, e.g., runqlat 10 1
  • runqlen, e.g., runqlen 10 1
  • softirqs, e.g., softirqs 10 1
  • hardirqs, e.g., hardirqs 10 1
  • bpftrace, e.g., bpftrace -l 'tracepoint:sched:*'
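
A quick USE-style pass over a few of these tools might look like the following sketch (runqlat requires BCC; durations are arbitrary):

    uptime                 # load averages: rising values suggest growing demand
    mpstat -P ALL 1 5      # per-CPU utilization; look for hot or imbalanced CPUs
    vmstat 1 5             # 'r' column greater than CPU count indicates saturation
    pidstat 1 5            # which processes are consuming CPU
    sudo runqlat 10 1      # run-queue (scheduler) latency histogram, from BCC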

Visualization

This section introduces CPU utilization heat maps, CPU subsecond-offset heat maps, flame graphs, and FlameScope.

Tuning

The tuning may use scheduling priority, power states, and CPU binding.
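
A minimal sketch of these three tuning knobs using standard utilities (the PID, CPU list, and governor are placeholders, not recommendations):

    PID=123                                  # placeholder PID

    # Scheduling priority: demote a batch job (nice 19 is the lowest priority)
    renice -n 19 -p "$PID"

    # Power states: inspect and set the CPU frequency governor (cpupower package)
    cpupower frequency-info | grep -i 'governor'
    sudo cpupower frequency-set -g performance

    # CPU binding: pin the process to CPUs 0-3 to improve cache warmth
    sudo taskset -cp 0-3 "$PID"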

7. Memory

This chapter provides the basis for memory analysis, including background information on key concepts and the architecture of hardware and software memory.

Concepts

Virtual Memory

Virtual memory is an abstraction that provides each process its own private address space. The process address space is mapped by the virtual memory subsystem to main memory and the physical swap device.

Paging

Paging is the movement of pages in and out of main memory. File system paging (good paging) is caused by the reading and writing of pages in memory-mapped files. Anonymous paging (bad paging) involves data that is private to a process: the process heap and stacks.

Demand Paging

Demand paging maps pages of virtual memory to physical memory on demand, deferring the CPU overhead of creating the mappings until they are needed.

Utilization and Saturation

If demand for memory exceeds the amount of main memory, main memory becomes saturated, and the operating system may employ paging or the OOM killer to free it.

Architecture

This section introduces memory architecture:

Hardware Main Memory

The common type of main memory is dynamic random-access memory (DRAM), and column address strobe (CAS) latency for DDR4 is around 10-20 ns. The main memory architecture can be uniform memory access (UMA) or non-uniform memory access (NUMA). Main memory may be connected via a shared system bus for a single or multiple processors, directly attached to processors, or through an interconnected memory bus. The MMU (memory management unit) translates virtual addresses to physical addresses for each page and offset within a page. The MMU uses a TLB (translation lookaside buffer) as a first-level cache for addresses in the page tables.

Software

The kernel tracks free memory in a free list of pages that are available for immediate allocation. When memory is low, the kernel may use swapping, reap any memory that can be freed, or use the OOM killer to free memory.

Process Virtual Address Space

The process virtual address space is a range of virtual pages that are mapped to physical pages, and addresses are split into segments like executable text, executable data, heap, and stack. There are a variety of user- and kernel-level memory allocators, with trade-offs among simple APIs (malloc/free), efficient memory usage, performance, and observability.

Methodology

This section describes various methodologies for memory analysis:

  • Tools Method: involves checking page scanning, pressure stall information (PSI), swapping, vmstat, OOM killer, and perf.
  • USE Method: utilization of memory, the degree of page scanning, swapping, OOM killer, and hardware errors.
  • Characterizing usage
  • Performance monitoring
  • Leak detection
  • Static performance tuning

Observability Tools

This section introduces memory observability tools, including:

  • vmstat
  • PSI, e.g., cat /proc/pressure/memory
  • swapon
  • sar
  • slabtop, e.g., slabtop -sc
  • numastat
  • ps
  • pmap, e.g. pmap -x 123
  • perf
  • drsnoop
  • wss
  • bpftrace

Tuning

This section describes tunable parameters for Linux kernels such as:

  • vm.dirty_background_bytes
  • vm.dirty_ratio
  • kernel.numa_balancing
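
These tunables can be inspected and changed with sysctl; a hedged sketch follows (the values are purely illustrative, not recommendations):

    # Inspect the current values
    sysctl vm.dirty_background_bytes vm.dirty_ratio kernel.numa_balancing

    # Change at runtime (illustrative values only; tune for your workload)
    sudo sysctl -w vm.dirty_background_bytes=$((128 * 1024 * 1024))
    sudo sysctl -w vm.dirty_ratio=20
    sudo sysctl -w kernel.numa_balancing=0

    # Persist across reboots
    echo 'vm.dirty_ratio = 20' | sudo tee /etc/sysctl.d/99-memory-tuning.conf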

8. File Systems

This chapter provides the basis for file system analysis:

Models

  • File System Interfaces: File system interfaces include read, write, open, and more.
  • File System Cache: The file system cache may cache reads or buffer writes.
  • Second-Level Cache: It can be any memory or storage type below the main-memory cache (e.g., flash or disk).

Concepts

  • File System Latency: the primary metric of file system performance, measuring time spent in the file system and disk I/O subsystem.
  • Cache: The file system will use main memory as a cache to improve performance.
  • Random vs Sequential I/O: A series of logical file system I/O can be either random or sequential based on the file offset of each I/O.
  • Prefetch/Read-Ahead: Prefetch detects a sequential read workload and issues disk reads before the application requests them.
  • Write-Back Caching: It marks writes as completed after transferring them to main memory and writes them to disk asynchronously.
  • Synchronous Writes: using O_SYNC, O_DSYNC or O_RSYNC flags.
  • Raw and Direct I/O
  • Non-Blocking I/O
  • Memory-Mapped Files
  • Metadata: information about the file system's logical contents and physical layout, which is read and written in addition to file data.
  • Logical vs Physical I/O
  • Access Timestamps
  • Capacity

Architecture

This section introduces generic and specific file system architecture:

  • File System I/O Stack
  • VFS (virtual file system): a common interface for different file system types.
  • File System Caches
    • Buffer Cache
    • Page Cache
    • Dentry Cache
    • Inode Cache
  • File System Features
    • Block (fixed-size) vs Extent (pre-allocated contiguous space)
    • Journaling
    • Copy-On-Write
    • Scrubbing
  • File System Types
    • FFS – Berkeley fast file system
    • ext3/ext4
    • XFS
    • ZFS
  • Volume and Pools

Methodology

This section describes various methodologies for file system analysis and tuning:

  • Disk Analysis
  • Latency Analysis
    • Transaction Cost
  • Workload Characterization
    • cache hit ratio
    • cache capacity and utilization
    • distribution of I/O arrival times
    • errors
  • Performance monitoring (operation rate and latency)
  • Static Performance Tuning
  • Cache Tuning
  • Workload Separation
  • Micro-Benchmarking
    • Operation types (read/write)
    • I/O size
    • File offset pattern
    • Write type
    • Working set size
    • Concurrency
    • Memory mapping
    • Cache state
    • Tuning

Observability Tools

  • mount
  • free
  • vmstat
  • sar
  • slabtop, e.g., slabtop -a
  • strace, e.g., strace -ttT -p 123
  • fatrace
  • opensnoop, e.g., opensnoop -T
  • filetop
  • cachestat, e.g., cachestat -T 1
  • bpftrace

9. Disks

This chapter provides the basis for disk I/O analysis. The parts are as follows:

Models

  • Simple Disk: includes an on-disk queue for I/O requests
  • Caching Disk: on-disk cache
  • Controller: HBA (host-bus adapter) bridges CPU I/O transport with the storage transport and attached disk devices.

Concepts

  • Measuring Time: I/O request time = I/O wait-time + I/O service-time
    • disk service time = utilization / IOPS (a worked example follows this list)
  • Time Scales
  • Caching
  • Random vs Sequential I/O
  • Read/Write Ratio
  • I/O size
  • Utilization
  • Saturation
  • I/O Wait
  • Synchronous vs Asynchronous
  • Disk vs Application I/O
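
As a worked example of the service-time formula above: a disk reporting 60% utilization while delivering 300 IOPS has an average service time of about 2 ms:

    awk 'BEGIN {
      utilization = 0.60; iops = 300
      printf "average disk service time = %.1f ms\n", utilization / iops * 1000
    }'
    # Prints: average disk service time = 2.0 ms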

Architecture

  • Disk Types
    • Magnetic Rotational
      • max throughput = max sectors per track x sector size x rpm / 60 s
      • Short-Stroking
      • Sector Zoning
      • On-Disk Cache
    • Solid-State Drives
      • Flash Memory
      • Persistent Memory
  • Interfaces
    • SCSI
    • SAS
    • SATA
    • NVMe
  • Storage Type
    • Disk Devices
    • RAID
  • Operating System Disk I/O Stack

Methodology

  • Tools Method
    • iostat
    • iotop/biotop
    • biolatency
    • biosnoop
  • USE Method
  • Performance Monitoring
  • Workload Characterization
    • I/O rate
    • I/O throughput
    • I/O size
    • Read/write ratio
    • Random vs sequential
  • Latency Analysis
  • Static Performance Tuning
  • Cache Tuning
  • Micro-Benchmarking
  • Scaling

Observability Tools

  • iostat
  • pressure stall information (PSI)
  • perf
  • biolatency, e.g., biolatency 10 1
  • biosnoop
  • biotop
  • ioping

10. Network

This chapter introduces network analysis. The parts are as follows:

Models

  • Network Interface
  • Controller
  • Protocol Stack
    • TCP/IP
    • OSI Model

Concepts

  • Network and Routing
  • Protocols
  • Encapsulation
  • Packet Size
  • Latency
    • Connection Latency
    • First-Byte Latency
    • Round-Trip Time
  • Buffering
  • Connection Backlog
  • Congestion Avoidance
  • Utilization
  • Local Connection

Architecture

  • Protocols
  • IP
  • TCP
    • Sliding Window
    • Congestion avoidance
    • TCP SYN cookies
    • 3-way Handshake
    • Duplicate Ack Detection
    • Congestion Controls
    • Nagle algorithm
    • Delayed Acks
  • UDP
    • QUIC and HTTP/3
  • Hardware
    • Interfaces
    • Controller
    • Switches and Routers
    • Firewalls
  • Software
    • Network Stack
  • Linux Stack
    • TCP Connection queues
    • TCP Buffering
    • Segmentation Offload: GSO and TSO
    • Network Device Drivers
    • CPU Scaling
    • Kernel Bypass

Methodology

  • Tools Method
    • netstat -s
    • ip -s link
    • ss -tiepm
    • nicstat
    • tcplife
    • tcptop
    • tcpdump
  • USE Method
  • Workload Characterization
    • Network interface throughput
    • Network interface IOPS
    • TCP connection rate
  • Latency Analysis
  • Performance Monitoring
    • Throughput
    • Connections
    • Errors
    • TCP retransmits
    • TCP out-of-order packets
  • Packet Sniffing
    • tcpdump -ni eth4
  • TCP Analysis
  • Static Performance Tuning
  • Resource Controls
  • Micro-Benchmarking

Observability Tools

  • ss, e.g., ss -tiepm
  • strace, e.g., strace -e sendmsg,recvmsg ss -t
  • ip, e.g., ip -s link
  • ifconfig
  • netstat
  • sar
  • nicstat
  • ethtool, e.g., ethtool -S eth0
  • tcplife, tcptop
  • tcpretrans
  • bpftrace
  • tcpdump
  • wireshark
  • pathchar
  • iperf
  • netperf
  • tc

Tuning

sysctl -a | grep tcp
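
A hedged sketch of inspecting and adjusting a few commonly tuned TCP parameters (the values shown are illustrative; appropriate settings depend on the kernel and workload):

    # List all TCP-related tunables
    sysctl -a 2>/dev/null | grep tcp | less

    # Inspect a few commonly tuned parameters
    sysctl net.core.somaxconn net.ipv4.tcp_congestion_control \
           net.ipv4.tcp_rmem net.ipv4.tcp_wmem

    # Illustrative changes: larger listen backlog, BBR congestion control
    sudo sysctl -w net.core.somaxconn=4096
    sudo sysctl -w net.ipv4.tcp_congestion_control=bbr   # requires the tcp_bbr module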

11. Cloud Computing

This chapter introduces cloud performance analysis with the following parts:

Background

  • Instance Types: m5 (general-purpose), c5 (compute optimized)
  • Scalable Architecture: horizontal scalability with load balancers, web servers, application servers, and databases.
  • Capacity Planning: Dynamic sizing (auto scaling) using auto scaling groups, and scalability testing.
  • Storage: File store, block store, object store
  • Multitenancy
  • Orchestration (Kubernetes)

Hardware Virtualization

  • Type 1: execute directly on the processors using native hypervisor or bare-metal hypervisor (e.g., Xen)
  • Type 2: execute within a host OS and hypervisor is scheduled by the host kernel
  • Implementation: Xen, Hyper-V, KVM, Nitro
  • Overhead:
    • CPU overhead (binary translation, paravirtualization, hardware assisted)
    • Memory Mapping
    • Memory Size
    • I/O
    • MultiTenant Contention
    • Resource Controls
  • Resource Controls
    • CPUs – Borrowed virtual time, Simple earliest deadline first, Credit-based schedulers
    • CPU Caches
    • Memory Capacity
    • File System Capacity
    • Device I/O
  • Observability
    • xentop
    • perf kvm stat live
    • bpftrace -lv t:kvm:kvm_exit
    • mpstat -P ALL 1

OS Virtualization

  • Implementation: Linux supports namespaces and cgroups, which are used to create containers. Kubernetes builds on these primitives with its own architecture of Pods, kube-proxy, and CNI networking.
  • Namespaces
    • lsns
  • Control Groups (cgroups) limit the usage of resources
  • Overhead – CPU, Memory Mapping, Memory Size, I/O, and Multi-Tenant Contention
  • Resource Controls – throttle access to resources so they can be shared more fairly
    • CPU
    • Shares and Bandwidth
    • CPU Cache
    • Memory Capacity
    • Swap Capacity
    • File System Capacity
    • File System Cache
    • Disk I/O
    • Network I/O
  • Observability
    • from Host
      • kubectl get pod
      • docker ps
      • kubectl top nodes
      • kubectl top pods
      • docker stats
      • cgroup stats (cpuacct.usage and cpuacct.usage_percpu)
      • systemd-cgtop
      • nsenter -t 123 -m -p top
      • Resource Controls (throttled time, non-voluntary context switches, idle CPU, busy, all other tenants idle)
    • from guest (container)
      • iostat -sxz 1
  • Lightweight Virtualization
  • Lightweight hypervisors based on process virtualization (e.g., Amazon Firecracker)
    • Implementation – Intel Clear Containers, Kata Containers, Google gVisor, Amazon Firecracker
  • Overhead
  • Resource Controls
  • Observability
    • From Host
    • From Guest
      • mpstat -P ALL 1
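
From the host, per-container resource usage and throttling can also be read directly from the cgroup filesystem; a minimal sketch assuming cgroup v2 and a placeholder cgroup path:

    # Placeholder path: find the real one via /proc/<container-pid>/cgroup
    CG=/sys/fs/cgroup/system.slice/docker-abc123.scope

    cat "$CG/cpu.stat"        # usage_usec, nr_throttled, throttled_usec
    cat "$CG/memory.current"  # current memory usage in bytes
    cat "$CG/memory.max"      # memory limit ("max" means unlimited)
    cat "$CG/io.stat"         # per-device bytes and I/Os

    # Run a tool inside the container's namespaces (123 is a placeholder PID)
    sudo nsenter -t 123 -m -p top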

12. Benchmarking

This chapter discusses benchmarks and provides advice with methodologies. The parts of this chapter include:

Background

  • Reasons
    • System design
    • Proof of concept
    • Tuning
    • Development
    • Capacity planning
    • Troubleshooting
    • Marketing
  • Effective Benchmarking
    • Repeatable
    • Observable
    • Portable
    • Easily presented
    • Realistic
    • Runnable
    • Benchmark Analysis
  • Benchmark Failures
    • Casual Benchmarking – you benchmark A, but measure B, and conclude you measured C, e.g. disk vs file system (buffering/caching may affect measurements).
    • Blind Faith
    • Numbers without Analysis – include description of the benchmark and analysis.
    • Complex Benchmark Tools
    • Testing the wrong thing
    • Ignoring the Environment (not tuning same as production)
    • Ignoring Errors
    • Ignoring Variance
    • Ignoring Perturbations
    • Changing Multiple Factors
    • Friendly Fire
  • Benchmarking Types
    • Micro-Benchmarking
    • Simulation – simulate customer application workload (macro-benchmarking)
    • Replay
    • Industry Standards – TPC, SPEC

Methodology

This section describes methodologies for performing benchmarking:

  • Passive Benchmarking (anti-methodology)
    • pick a BM tool
    • run with a variety of options
    • Make a slide of results and share it with management
    • Problems
      • Invalid due to software bugs
      • Limited by benchmark software (e.g., single thread)
      • Limited by a component that is unrelated to the benchmark (congested network)
      • Limited by configuration
      • Subject to perturbation
      • Benchmarking the wrong thing entirely
  • Active Benchmarking
    • Analyze performance while the benchmark is running
    • bonnie++
    • iostat -sxz 1
  • CPU Profiling
  • USE Method
  • Workload Characterization
  • Custom Benchmarks
  • Ramping Load
  • Statistical Analysis
    • Selection of benchmark tool, its configuration
    • Execution of the benchmark
    • Interpretation of the data
  • Benchmarking Checklist
    • Why not double?
    • Did it break limits?
    • Did it error?
    • Did it reproduce?
    • Does it matter?
    • Did it even happen?

13. perf

This chapter introduces the perf tool:

  • Subcommands Overview
    • perf record -F 99 -a -- sleep 30
  • perf Events
    • perf list
  • Hardware Events (PMCs)
    • Frequency Sampling
    • perf record -vve cycles -a sleep 1
  • Software Events
    • perf record -vve context-switches -a -- sleep
  • Tracepoint Events
    • perf record -e block:block_rq_issue -a sleep 10; perf script
  • Probe Events
    • kprobes, e.g., perf probe --add do_nanosleep
    • uprobes, e.g., perf probe -x /lib.so.6 --add fopen
    • USDT
  • perf stat
    • Interval Statistics
    • Per-CPU Balance
    • Event Filters
    • Shadow Statistics
  • perf record
    • CPU Profiling, e.g., perf record -F 99 -a -g -- sleep 30
    • Stack Walking
  • perf report
    • TUI
    • STDIO
  • perf script
    • Flame graphs
  • perf trace

14. Ftrace

This chapter introduces the Ftrace tool. The sections are:

  • Capabilities
  • tracefs
    • tracefs contents, e.g., ls -F /sys/kernel/debug/tracing
  • Ftrace Function Profiler
  • Ftrace Function Tracing
  • Tracepoints
    • Filter
    • Trigger
  • kprobes
    • Event Tracing
    • Argument and Return Values
    • Filters and Triggers
  • uprobes
    • Event Tracing
    • Argument and Return Values
  • Ftrace function_graph
  • Ftrace hwlat
  • Ftrace Hist Triggers
  • perf ftrace
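
A minimal tracefs sketch of the function profiler and function tracer listed above (run as root; paths assume tracefs is mounted at /sys/kernel/tracing):

    cd /sys/kernel/tracing            # or /sys/kernel/debug/tracing on older systems

    # Function profiler: count kernel function calls while a workload runs
    echo 1 > function_profile_enabled
    sleep 10
    echo 0 > function_profile_enabled
    head trace_stat/function0         # per-CPU call counts and timings

    # Function tracer, filtered to a single kernel function
    echo do_nanosleep > set_ftrace_filter
    echo function > current_tracer
    sleep 2
    head -20 trace
    echo nop > current_tracer         # stop tracing and clear the filter
    echo > set_ftrace_filter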

15. BPF

This chapter introduces BPF-based tracing tools. The sections are:

  • BCC – BPF Compiler Collection, e.g., biolatency.py -mF
  • bpftrace – an open-source tracer built upon BPF and BCC
  • Programming
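
In the spirit of this chapter, a small bpftrace program that counts system calls by process name for five seconds (run as root):

    sudo bpftrace -e '
      tracepoint:raw_syscalls:sys_enter { @syscalls[comm] = count(); }
      interval:s:5 { exit(); }   // maps are printed automatically on exit
    '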

16. Case Study

This chapter describes the story of a real-world performance issue.

  • Problem Statement – Java application in AWS EC2 Cloud
  • Analysis Strategy
    • Checklist
    • USE method
  • Statistics
    • uptime
    • mpstat 10
  • Configuration
    • cat /proc/cpuinfo
  • PMCs
    • ./pmcarch -p 123 10
  • Software Events
    • perf stat -e cs -a -I 1000
  • Tracing
    • cpudist -p 123 10 1
  • Conclusion
    • No container neighbors
    • LLC size and workload difference
    • CPU difference

May 28, 2023

Patterns for API Design

Filed under: API,Computing — admin @ 10:46 pm

I recently read Olaf Zimmermann’s book “Patterns for API Design”, which reviews the theory and practice of API design patterns. These patterns build upon the earlier work of Gregor Hohpe on Enterprise Integration Patterns and Martin Fowler’s Patterns of Enterprise Application Architecture. Following is a summary of these API patterns from the book:

1. Application Programming Interface (API) Fundamentals

The first chapter defines remote API fundamentals: the contract for desired behavior, the communication protocol, network endpoints, and policies regarding failures. The chapter also surveys the history of remote APIs such as TCP/IP-based FTP, RPC-based DCE/CORBA/RMI/gRPC, queue/messaging-based APIs, REST style, and data streams/pipelines. The authors then examine cloud-native applications (CNA) and a set of principles described as IDEAL: Isolated State, Distribution, Elasticity, Automation and Loose Coupling. These traits are then summarized as:

  • Fit for purpose
  • Rightsized and modular
  • Resilient and protected
  • Controllable and adaptable
  • Workload-aware and resource-efficient
  • Agile and tool-supported

The authors describe how service-oriented architecture originated and evolved into microservices, where each service has a single responsibility within a domain-specific business capability. Microservices facilitate software reuse but also bring new challenges, including the fallacies of distributed computing, data consistency and state management. These APIs should be considered products, and they may form ecosystems. API business value, visibility and lifetime help make an API successful by enabling rapid integration of systems while supporting the autonomy and independent evolution of those systems. The API design may differ based on:

  • One general vs many specialized endpoints
  • Fine vs coarse-grained endpoint and operation scope
  • Few operations carrying much data vs chatty interactions carrying little data
  • Data currentness vs correctness
  • Stable contracts vs fast changing ones

The authors review architecturally significant requirements such as understandability, information sharing vs hiding, amount of coupling, modifiability, performance & scalability, data parsimony, and security & privacy. They then describe developer experience as:

DX = function, stability, ease of use, and clarity

The developer experience includes quality attributes throughout the lifecycle of an API, such as development qualities, operational qualities and managerial qualities. The chapter ends with the definition of a domain model for remote APIs that includes communication participants, endpoints with contracts that describe operations, message structures, and the API contract.

2. Lakeside Mutual Case Study

Chapter 2 introduces the fictitious Lakeside Mutual case study to illustrate API design. This chapter examines the user stories and quality attributes for the case study, the analysis-level domain model and an architecture overview. The architecture overview describes the system context and application architecture including service components. The chapter then describes an example of an API specification using MDSL (Microservice Domain-Specific Language).

3. API Decision Narratives

This chapter goes over API design options and decisions, where each decision includes criteria, alternative options and recommendations presented in a why-statement and architecture-decision-record format. It starts with foundational API decisions and patterns that include:

3.1 API Visibility

This decision, which is primarily managerial and organizational, looks at the different visibility options such as Public API, Community API and Solution-Internal API.

3.2 API Integration

This decision looks at API integration types, where an API can be integrated with the backend horizontally or with the frontend and backend vertically. In the former option, the backend exposes its services via a message-based remote backend integration API. In the latter option, the APIs are exposed via a message-based remote frontend integration API.

3.3 Documentation of API

The API designers need to decide whether the API should have documentation and, if so, how it should be documented. For example, there are multiple standards such as the OpenAPI Specification (formerly known as Swagger), Web Application Description Language (WADL), Web Services Description Language (WSDL) and Microservice Domain-Specific Language (MDSL).

3.4 API Roles and Responsibilities

The API designers have to find an appropriate business granularity for the service and handle cohesion and coupling criteria. The drivers for this decision are to define the architectural role that an API endpoint should play and the responsibility of each API operation. The role of an API endpoint can be a Processing Resource for processing incoming commands or an Information Holder Resource for storage and retrieval of data or metadata. The information holder roles can be further divided into operational data (short-lived, transactional), master data (long-lived data for business transactions), reference data (long-lived data for looking up delivery status, zip codes, etc.), link lookup resources to identify links to resources, and data transfer resources to offer a shared data exchange between clients.

3.5 Defining Operation Responsibilities

The operation responsibilities include defining the read-write characteristics of each API operation and can be categorized into Computation Function, State Creation Operation, State Transition Operation and Retrieval Operation. The Computation Function computes a result solely from the client input without reading or writing server-side state. The State Creation Operation creates state reliably on a write-only API endpoint. The State Transition Operation performs one or more activities, causing a server-side state change, with considerations for network efficiency and data parsimony. The Retrieval Operation represents a read-only access operation to find the data.

3.6 Selecting Message Representation Patterns

The structural representation patterns deal with designing message representation structures, with considerations for finding the optimal number of message parameters and the semantic meaning and stereotypes of the representation elements. This requires four decisions: the responsibility of message elements, the structure of the parameter representation, the exchange of required context information, and the meaning and stereotypes of message elements. The structure of the parameter representation can be nested or flat with these types: Atomic Parameter, Atomic Parameter List, Parameter Tree, and Parameter Forest. The Atomic Parameter defines a single parameter or body element. The Atomic Parameter List aggregates multiple atomic parameters as a list. The Parameter Tree defines a hierarchical structure with one or more child nodes. The Parameter Forest comprises two or more Parameter Trees. Security and privacy concerns such as data integrity and confidentiality, as well as semantic proximity, determine the right choice among these structure types.

3.7 Element Stereotypes

The element stereotype patterns include Data Element, Metadata Element, Link Element, and ID Element. Security and data privacy concerns drive Data Elements and Metadata Elements, and messages may become larger if Metadata Elements are included. The unique ID Element is used to identify API endpoints, operations, and message representation elements. The Link Element acts as a human- and machine-readable, network-accessible pointer to other endpoints and operations.

3.8 Governing API Quality

The quality of service (QoS) for an API includes reliability, performance, security and scalability. The themes of the decisions for quality governance include:

3.8.1 Identification and Authentication of the API Client

Identification, authentication and authorization are important for API security, but they also enable measures for ensuring many other qualities.

3.8.2 API Key

The API Key identifies the client; an additional signature can be made with a secret key, which is never transmitted. You may use OAuth 2.0, OpenID, or Kerberos in combination with LDAP, CHAP, and EAP.
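
As an illustration (the endpoint, header names, and key values are hypothetical), an API Key is typically sent with each request, optionally accompanied by a signature computed with the secret that never travels over the wire:

    API_KEY='k-1234'          # public identifier of the client (hypothetical)
    SECRET='s3cr3t'           # shared secret, never sent on the wire (hypothetical)
    BODY='{"customerId":"c-42"}'

    # Sign the body with HMAC-SHA256 and send the key and signature as headers
    SIGNATURE=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" -hex | awk '{print $NF}')

    curl -s https://api.example.com/v1/customers \
         -H "X-API-Key: $API_KEY" \
         -H "X-Signature: $SIGNATURE" \
         -H 'Content-Type: application/json' \
         -d "$BODY"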

3.8.3 Pricing Plan

The pricing plan looks at the metering and charging for API consumption and its variants include Freemium Model, Flat-Rate Subscription, Usage-based Pricing and Market-based Pricing based on economic aspects.

3.8.4 Rate Limit

The Rate Limit safeguards against API clients that overuse the API.

3.8.5 Service Level Agreement

The Service Level Agreement defines testable service-level objectives to establish a structured, quality-oriented agreement with the API product owner.

3.8.6 Error Report

The Error Report uses error codes in the response message that indicate and classify faults in a simple and machine-readable way.

3.8.7 Context Representation

The context representation uses Metadata Elements to carry contextual information in request and response messages. It can be used to cope with the diversity of protocols for distributed applications and transport security tokens and digital signatures.

3.8.8 Pagination

The pagination divides large response data into smaller chunks; its variants include Page-Based Pagination, Cursor-Based Pagination, Offset-Based Pagination, and Time-Based Pagination. It can optionally allow filtering capabilities, and the pagination structure can be defined as an Atomic Parameter List, Parameter Forest, or Parameter Tree.
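
For illustration (the endpoint and parameter names are hypothetical), the variants differ mainly in the request parameters and the navigation metadata returned:

    # Page-based: the client asks for a page number and size
    curl -s 'https://api.example.com/v1/claims?page=3&pageSize=50'

    # Offset-based: the client skips a number of records
    curl -s 'https://api.example.com/v1/claims?offset=100&limit=50'

    # Cursor-based: the client passes the opaque cursor returned with the previous chunk
    curl -s 'https://api.example.com/v1/claims?cursor=eyJpZCI6MTUwfQ&limit=50'

    # Time-based: the client requests a time window
    curl -s 'https://api.example.com/v1/claims?since=2023-01-01T00:00:00Z&until=2023-02-01T00:00:00Z'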

3.8.9 Wish List and Wish Template

A Wish List allows API clients to list the desired data elements of the requested resource in the request. When the response contains nested data, a Wish Template can be used to specify parameters in the request message that should be included in the corresponding response message.

3.8.10 Conditional Request

This pattern makes a Conditional Request by adding Metadata Elements to the request message, and the API service processes the request only if the condition specified by the metadata is met.
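
A common HTTP realization of this pattern (with a hypothetical endpoint) uses fingerprint metadata: the server returns an ETag, and the client replays it in If-None-Match so the server can answer 304 Not Modified instead of resending the body:

    # First request: capture the ETag fingerprint returned by the server
    ETAG=$(curl -sI https://api.example.com/v1/customers/c-42 \
           | awk 'tolower($1) == "etag:" { print $2 }' | tr -d '\r')

    # Conditional request: the body is only transferred if the resource changed
    curl -s -o /dev/null -w '%{http_code}\n' \
         -H "If-None-Match: $ETAG" \
         https://api.example.com/v1/customers/c-42
    # Prints 304 when unchanged, 200 with a fresh representation otherwise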

3.8.11 Request Bundle

Request Bundle is defined as a data container that assembles multiple requests with unique identifiers in a single request message.

3.8.12 Embedded Entity

This pattern embeds a Data Element in the request or response instead of link or identifier.

3.8.13 Linked Information Holder

Linked Information Holder adds a Link Element to the message that points to the API endpoint representing the linked element.

3.9 API Evolution

API Evolution patterns define governing rules balancing stability and compatibility with maintainability and extensibility such as:

3.9.1 Version Identifier

Version identifier is added as a Metadata Element to the endpoint address, protocol header or the message payload to indicate possibly incompatible changes to clients.

3.9.2 Semantic Versioning

Semantic Versioning introduces a hierarchical three-number versioning schema x.y.z, which allows API providers to denote level of changes as major, minor and patch versions.

3.9.3 Commissioning and Decommissioning

The variants for version introduction and decommissioning decision include Two in Production, Limited Lifetime Guarantee, and Aggressive Obsolescence.

3.9.4 Experimental Preview

Experimental Preview provides access to an API to receive early feedback from consumers without making any commitments about the functionality, stability and longevity.

4. Pattern Language Introduction

This chapter introduces a pattern language and basic scoping and structural patterns. Many of the patterns build upon Enterprise Integration Patterns and GoF Design Patterns when defining the structure of a message. This chapter categorizes patterns into Foundation patterns, Responsibility patterns, Structure patterns, Quality patterns and Evolution patterns. These patterns also follow Design Refinement Phases based on the Unified Process:

  • Inception – Foundation
  • Elaboration – Responsibility and Quality
  • Construction – Structure, Responsibility and Quality
  • Transition – Foundation, Quality and Evolution

4.1 Foundations: API Visibility and Integration Types

These patterns deal with types of systems, subsystems and components as well as where an API should be accessible. The API integration types can be Frontend Integration and Backend Integration. Frontend Integrations, also referred to as vertical integrations, are consumed by API clients in application frontends. Cloud-native applications and microservices-based systems benefit from Backend Integration, sometimes called horizontal integration, to access information or activity in other systems. The API visibility alternatives can be Public API, Community API and Solution-Internal API. A Public API specifies endpoints, operations, message representations, quality of service and a lifecycle model that can be accessed by an unlimited or unknown number of API clients and can be controlled with API keys. You may apply other patterns such as Version Identifier, Pricing Plan, Rate Limit and Service Level Agreement with Public APIs. The Community API is only available to a community that may consist of different organizations. The Solution-Internal API is also referred to as a Platform API and may be exposed within a single cloud provider offering.

4.2 Basic Structure Patterns

The structure patterns look at the number of representation elements for request and response messages and decide how these elements should be grouped. These patterns include the Atomic Parameter that describes plain data such as text and numbers; the Atomic Parameter List that groups several elementary parameters; the Parameter Tree that provides nested parameters; and the Parameter Forest that groups multiple tree parameters.

5. Define Endpoint Types and Operations

This chapter corresponds to the Define phase of the Align-Define-Design-Refine (ADDR) process and describes high-level endpoint identification activities. The authors look at user stories, event storming and other collaboration techniques to define API roles and responsibilities. The design of API contracts also has to consider developer experience in terms of function, stability, ease of use, and clarity. Other quality attributes that the API designer has to decide include: accuracy for functional correctness including preconditions, invariants and postconditions; distribution of control and autonomy between API client and provider; scalability, performance and availability with Service Level Agreements for mission-critical APIs; manageability for monitoring APIs; consistency and atomicity for all-or-nothing semantics; the idempotence property; and auditability for risk management.

5.1 Endpoint Roles (aka Service Granularity)

The two general endpoint roles are Processing Resource and Information Holder Resource. The Processing Resource role allows remote clients to trigger an action; related design concerns include contract expressiveness and service granularity; learnability and manageability; semantic interoperability; response time; security and privacy; and compatibility and evolvability. The Information Holder Resource exposes domain data in an API, and it may use domain-driven design and object-oriented analysis and design to model the data. Other related concerns include quality attribute conflicts and trade-offs; security; data freshness vs consistency; and compliance with architectural design principles. Related patterns include:

  • Operational Data Holder to create, read, update and delete its data often and fast.
  • Master Data Holder to access master data that lives for a long time, changes rarely and is referenced from many clients. The request and response messages of master data often take the form of Parameter Trees, and master data updates may come in the form of coarse-grained full updates or fine-grained partial updates.
  • Reference Data Holder is used to lookup reference data that is long lived and is immutable for clients using API endpoints. Its desired qualities include Do not repeat yourself (DRY) and performance vs consistency trade-off for read access.
  • Link Lookup Resource allows referring to other resources so that clients remain loosely coupled if API provider changes the destination of links. The design challenges include: coupling between clients and endpoints; dynamic endpoint references; centralization vs decentralization; message sizes, number of calls, resource use; dealing with broken links; and number of endpoints and API complexity.
  • Data Transfer Resource allows exchanging data between participants without knowing each other and without being available at the same time. The design considerations include coupling (time and location dimensions); communication constraints; reliability; scalability; storage space efficiency; latency; and ownership management. You may introduce a shared storage endpoint with a State Creation Operation and a Retrieval Operation. The pattern properties additionally include access control; (lack of) coordination; optimistic locking; polling; and garbage collection.

5.2 Operation Responsibilities

The four operation responsibilities include:

  • State Creation Operation to allow its clients to report that something has happened, e.g. to trigger instant or later processing. The design concerns include: coupling trade-offs (accuracy and expressiveness vs information parsimony); timing; consistency; and reliability. It may or may not have fire-and-forget semantics, and idempotency may be needed for the transaction boundary. A popular variant of this pattern is the Event Notification Operation, notifying the endpoint about an external event.
  • Retrieval Operation to retrieve information and allow further client-side processing. The design issues include: veracity, variety, velocity and volume; workload management; and network efficiency vs data parsimony.
  • State Transition Operation to allow a client to initiate a processing action that causes the provider-side application state to change. The design concerns include service granularity; consistency and auditability; dependencies on state changes made beforehand; workload management; and network efficiency vs data parsimony. State Transition Operations are generally transactional, supporting ACID behavior, and may use ABAC for compliance and security controls.
  • Computation Function to allow a client to invoke side-effect-free remote processing on the provider side based on the input parameters. The relevant design issues include reproducibility and trust; performance; and workload management. Its examples include a Transformation service, Validation service, and Long Running Computation.

6. Design Request and Response Message Representations

This chapter corresponds to the Design phase of the Align-Define-Design-Refine (ADDR) process and examines structural patterns for requests and responses. The challenges when designing message representations include interoperability at the protocol and message-content levels; latency; throughput and scalability; maintainability; and developer experience.

6.1 Data Element

The Data Element pattern allows exchanging application-level information between API clients and API providers while keeping them loosely coupled. The competing concerns include rich functionality vs ease of processing and performance; security and data privacy vs ease of configuration; and maintainability vs flexibility.

6.2 Metadata Element

The Metadata Element pattern allows enriching messages with additional information so that the receiver can interpret the message content correctly. The design concerns include interoperability and ease of use vs runtime efficiency. The variants of this pattern include Control Metadata Elements such as identifiers, flags, filters, ACLs, API keys, etc.; Aggregated Metadata Elements such as Pagination counters and statistical information; and Provenance Metadata Elements such as message/request IDs, creation dates, version numbers, etc.

6.3 ID Element

ID Element pattern helps identify elements of the Published Language using UUID or surrogate key when applying domain-driven design. The identification problems include effort vs stability; reliability for machines and humans; and security.

6.4 Link Element

Link Element is used to reference API endpoints and operations in request and response message payloads so that they can be called remotely.

6.5 API Key

API Key allows an API provider to identify and authenticate clients and their requests. The design issues include establishing basic security; access control; avoiding the need to store or transmit user credentials; decoupling clients from their organizations; security vs ease of use; and performance.

6.6 Error Report

Error Report allows an API provider to inform its clients about communication and processing faults. The design concerns include expressiveness and target audience expectations; robustness and reliability; security and performance; interoperability and portability; and internationalization.

6.7 Context Representation

Context Representation allows API consumers and providers to exchange context information without relying on any particular remoting protocol. The design considerations include interoperability and modifiability; dependency on evolving protocols; developer productivity (control vs convenience); diversity of clients and their requirements; end-to-end security; and logging and auditing at the business domain level.

7. Refine Message Design for Quality

This chapter reviews API Quality patterns related to the Design and Refine phases of the Align-Define-Design-Refine (ADDR) process. The major challenges with API Quality include message sizes vs number of requests; information needs of individual clients; network bandwidth usage vs computation efforts; implementation complexity vs performance; statelessness vs performance; and ease of use vs legacy.

7.1 Message Granularity

The message granularity patterns deal with performance and scalability; modifiability and flexibility; data quality; data privacy; and data freshness vs consistency. These patterns include:

  • Embedded Entity allows placing a Data Element in the request or response to avoid exchanging multiple messages.
  • Linked Information Holder can be used to keep the message small when an API deals with multiple information elements that reference each other.

7.2 Client-Driven Message Content (aka Response Shaping)

These patterns deal with performance, scalability, and resource use; information needs of individual clients; loose coupling and interoperability; developer experience; security and data privacy; and test and maintenance effort. These patterns include:

  • Pagination allows an API provider to deliver large sequences of structured data without overwhelming clients. The design concerns include session awareness and isolation; and data set size and data access profile. The variants of this pattern include Page-Based Pagination, Cursor-Based Pagination, Time-Based Pagination and Offset-Based Pagination.
  • Wish List allows an API client to inform the API provider at runtime about the data it is interested in.
  • Wish Template allows an API client to inform the API provider about nested data that it is interested in. For example, the Wish Templates of GraphQL are the query and mutation schemas providing declarative descriptions of the client requirements.

7.3 Message Exchange Optimization (aka Conversation Efficiency)

These patterns balance competing forces such as the complexity of endpoint, client, and message payload design; and accuracy of reporting and billing. These patterns include:

  • Conditional Request prevents unnecessary server-side processing and bandwidth usage by invoking API operation only when the condition is true. The design concerns include size of message; client workload; provider workload; and data currentness vs correctness. Its variants include Time-Based Conditional Request (e.g. If-Modified-Since HTTP header) and Fingerprint-Based Conditional Request (e.g. ETag and If-None-Match HTTP headers).
  • Request Bundle works as a data container that assembles multiple independent requests in a single request message with unique request identifiers.

8. Evolve API

This chapter reviews API Evolution patterns related to the Refine phase of the Align-Define-Design-Refine (ADDR) process. The major challenges with API Evolution include autonomy, loose coupling, extensibility, compatibility and sustainability. The patterns in this chapter include:

8.1 Versioning and Compatibility Management

These patterns include:

  • Version Identifier allows an API provider to indicate its current capabilities as well as the existence of possibly incompatible changes to clients. The design concerns include accuracy and exact identification; no accidental breakage of compatibility; client-side impact; and traceability of API versions in use.
  • Semantic Versioning allows stakeholders to compare API versions to detect incompatible changes. The design concerns include minimal effort to detect version incompatibility; clarity of change impact; clear separation of changes with different levels of impact and compatibility; manageability of API versions and related governance effort; and clarity with regard to the evolution timeline. The common numbering scheme in Semantic Versioning includes major, minor and patch versions.

8.2 Life-Cycle Management Guarantees

These patterns include:

  • Experimental Preview allows providers to make the introduction of a new API or new API version less risky for their clients and to obtain early adopter feedback without freezing the API design prematurely. The design considerations include innovation and new features; feedback; focusing effort; early learning; and security.
  • Aggressive Obsolescence allows API providers to reduce the effort for maintaining an entire API by removing unused or deprecated features. The design concerns include minimizing the maintenance effort; reducing forced changes to clients in a given time span as a consequence of API changes; respecting/acknowledging power dynamics; and commercial goals and constraints. An API version is marked as Release, Deprecate or Decommission in the lifecycle of Aggressive Obsolescence.
  • Limited Lifetime Guarantee allows a provider to let clients know for how long they can rely on the published version of an API. The design considerations include making client-side changes caused by API changes plannable and limiting the maintenance effort for supporting old clients.
  • Two in Production allows a provider to gradually update an API without breaking existing clients and without maintaining a large number of API versions in production. The design concerns include allowing the provider and the client to follow different life cycles; guaranteeing that API changes do not lead to undetected backward compatibility problems between clients and provider; ensuring the ability to roll back if a new API version is designed badly; minimizing changes to the client; and minimizing the maintenance effort for supporting clients relying on old API versions.

9. Document and Communicate API Contracts

This chapter does not correspond to any phase of the Align-Define-Design-Refine (ADDR) process. The challenges for Documenting APIs include interoperability; compliance; information hiding; economic aspects; performance and reliability; meter granularity; attractiveness from a consumer point of view. The patterns in the chapter include:

  • API Description to share knowledge between the API provider and its clients; related concerns include interoperability; consumability; information hiding; extensibility and evolvability.
  • Pricing Plan to allow the API provider to meter API service consumption and charge for it. Its variants include Subscription-based Pricing, Usage-based Pricing, and Market-based Pricing.
  • Rate Limit to allow the API provider to prevent API clients from excessive API usage, with design considerations for economic aspects; performance; reliability; impact and severity of risks of API abuse; and client awareness.
  • Service Level Agreement to allow the API client to learn about the specific quality-of-service characteristics of an API and its endpoint operations. The design concerns include business agility and vitality; attractiveness from the consumer point of view; availability; performance and scalability; security and privacy; government regulations and legal obligations; and cost-efficiency and business risks from a provider point of view.

10. Real-World Pattern Stories

This chapter examines API design and evolution in real-world business domains. The first case study discusses large-scale process integration in Terravis, a Swiss platform created when a new law required the digitization of Swiss land registry and mortgage businesses. Using the context dimensions defined by Philippe Kruchten, the Terravis platform is characterized in terms of system size, system criticality, system age, team distribution, rate of change, preexistence of stable architecture, governance and business model. Terravis applied the role and status of API guidance as well as other patterns such as Solution-Internal API, Community API, API Description, Service Level Agreements, Semantic Versioning, Error Report, Pricing Plan, Rate Limit, Context Representation, State Creation Operations, State Transition Operations, Pagination, etc. The other case study shows how an internal system at the concrete column manufacturer SACAC had to integrate existing software such as ERP and CAD systems; the chapter again uses Kruchten’s project dimensions to describe the project. The key challenges included the correctness of all calculations, and the solution applied the book’s guidelines for roles and status of APIs. The API used Solution-Internal API, Frontend Integration, Backend Integration, State Creation Operations, State Transition Operations, Retrieval Operations, Computation Functions, etc.

11. Conclusion

The last chapter concludes with how the pattern language in the book helps integration architects, API developers and other roles involved with API design and evolution. The authors also suggest how APIs can be refactored to the patterns described in the book and use Microservice Domain Specific Language (MDSL) Tools for refactoring. The chapter also describes advancements in API protocols and standards such as HTTP/2, HTTP/3, and gRPC. OpenAPI Specification is the dominant API description language for HTTP-based APIs and AsyncAPI is gaining adoption for message-based APIs, which can also generate MDSL bindings.

May 21, 2023

Heuristics from “Code That Fits in Your Head”

Filed under: Methodologies,Technology,Uncategorized — admin @ 5:00 pm

Code maintenance and readability are important aspects of writing software systems, and “Code That Fits in Your Head” comes with a lot of practical advice for writing maintainable code. Following are a few important heuristics from the book:

1. Art or Science

In the first chapter, the author compares software development with other fields such as civil engineering that deal with the design, construction, and maintenance of components. Though software development has these phases among others, its design and construction phases are intimately connected and require continuous iteration. Another metaphor discussed in the book is thinking of software development as a living organism like a garden: just as you prune weeds in a garden, you have to refactor the code base and manage technical debt. Yet another metaphor is software craftsmanship, in which a software developer may progress from apprentice to journeyman to master. Though these perspectives help, software doesn’t quite fit the art metaphor, and the author suggests heuristics and guidelines for programming instead. The author introduces software engineering as a structured framework for development activities.

2. Checklists

Many professions, such as airplane pilots and doctors, follow a checklist for accomplishing complex tasks. You may use a similar checklist for setting up a new code base: using Git, automating the build, enabling all compiler error messages, using linters and static analysis, and so on. Though software engineering is more than following a checklist, these measures help make small improvements.

3. Tackling Complexity

This chapter defines sustainability and quality as the purpose of the book, as software may exist for decades and needs to sustain its organization. The software exists to provide value, though in some cases the value may not be immediate. This means that at times worse technology succeeds because it provides a faster path to value, and companies that are too focused on perfection can run out of business; Richard Gabriel coined the aphorism that worse is better. Sustainability chooses the middle ground by moving in the right direction with checklists and balanced software development practices. The author compares the computer with the human brain; though the comparison is imperfect, human working memory is much smaller and can hold only about four to seven pieces of information. This number matters when writing code, because you spend more time reading code than writing it, and code with a large number of variables or much conditional logic is harder to understand. The author refers to the work of Daniel Kahneman, who suggested a model of thought comprising two systems: System 1 and System 2. When a programmer is in the zone or in a flow, System 1 is always active and tries to understand the code. This means that modular code with fewer dependencies, variables and decisions is easier to understand and maintain. The human brain has limited working memory, and if the code juggles more than about seven things at once, it becomes too complex to hold in your head.

4. Vertical Slice and Walking Skeleton

This chapter recommends starting with and deploying a vertical slice of the application to get to working software. A vertical slice may consist of multiple layers, but it gives early feedback and is working software. A number of software development methodologies such as test-driven development, behavior-driven development, domain-driven design, type-driven development, and property-driven development help build fine-grained implementations with tests. For example, if you don't have tests, you can use characterization tests to describe the behavior of existing software. Tests generally follow the Arrange-Act-Assert phases, where the arrange phase prepares the test, the act phase invokes the operation under test, and the assert phase verifies the actual outcome. Documentation can further explain why decisions in the code were made. The walking skeleton supports the vertical slice by using acceptance-test-driven development or outside-in test-driven development. For example, you can pick the simplest feature to implement that aims for the happy path to demonstrate that the system has a specific capability. The unit tests exercise this feature by using a Fake Object, data transfer objects (DTOs), and interfaces (e.g., a RepositoryInterface), and the dependencies with this fake behavior are injected into the tests. Real objects that are difficult to test can use the Humble Object pattern to drain the object of branching logic. Making small improvements that are continuously delivered also keeps stakeholders updated so that they know when you will be done. A sketch of the Arrange-Act-Assert structure with a Fake Object is shown below.
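As a concrete illustration of the Arrange-Act-Assert structure with a Fake Object injected through an interface, here is a minimal sketch; the PriceRepository trait, the InMemoryPriceRepository fake, and the total function are hypothetical and not taken from the book:

use std::collections::HashMap;

// A port that the code under test depends on (hypothetical example).
trait PriceRepository {
    fn price_of(&self, sku: &str) -> Option<f64>;
}

// Fake Object: an in-memory implementation used only by tests.
struct InMemoryPriceRepository {
    prices: HashMap<String, f64>,
}

impl PriceRepository for InMemoryPriceRepository {
    fn price_of(&self, sku: &str) -> Option<f64> {
        self.prices.get(sku).copied()
    }
}

// The unit under test, with its dependency injected via the trait.
fn total(repo: &dyn PriceRepository, skus: &[&str]) -> f64 {
    skus.iter().filter_map(|sku| repo.price_of(sku)).sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn totals_known_skus() {
        // Arrange: prepare the fake repository and the input.
        let mut prices = HashMap::new();
        prices.insert("book-1".to_string(), 10.0);
        prices.insert("book-2".to_string(), 5.0);
        let repo = InMemoryPriceRepository { prices };

        // Act: invoke the operation under test.
        let result = total(&repo, &["book-1", "book-2"]);

        // Assert: verify the actual outcome.
        assert_eq!(result, 15.0);
    }
}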

5. Encapsulation

Encapsulation hides details by defining a contract that describes the valid interactions between objects and callers. Parameterized tests can capture the desired behavior and assert the invariants. Incremental changes can be added using test-driven development with the red-green-refactor cycle, where you first write a failing test, then make the test pass, and then refactor to improve the code. When using a contract to capture the interactions, you can apply Postel's law to build resilient systems, i.e.,

Be conservative in what you send, be liberal in what you accept.

Encapsulation guarantees that an object is always valid; for example, you can use a constructor to validate all invariants, including pre-conditions and post-conditions, as sketched below.
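The following minimal sketch shows a constructor that enforces its invariants so that an invalid object can never be constructed; the Celsius type and its bound are hypothetical and not from the book:

// Hypothetical value object whose constructor enforces its invariants.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Celsius(f64);

impl Celsius {
    // Precondition: the value must be at or above absolute zero.
    pub fn new(value: f64) -> Result<Self, String> {
        if value < -273.15 {
            Err(format!("{} is below absolute zero", value))
        } else {
            Ok(Celsius(value))
        }
    }

    // Postcondition: once constructed, the wrapped value is always valid.
    pub fn value(&self) -> f64 {
        self.0
    }
}

fn main() {
    assert!(Celsius::new(-300.0).is_err());
    assert_eq!(Celsius::new(20.0).unwrap().value(), 20.0);
}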

6. Triangulation

As the working memory of humans is very small, you have to decompose and compartmentalize the code structure into smaller chunks that can be easily understood. The author describes a devil's advocate technique for validating behavior in unit tests, where you try to pass the tests with an incomplete implementation; if you succeed, you need more test cases. This process can be treated as a kind of triangulation:

As the tests get more specific, the code gets more generic

7. Decomposition

Code rot occurs because no one pays attention to the overall quality when making small changes. You can use metrics to track gradual decay, such as keeping cyclomatic complexity below seven. In order to improve code readability, the author suggests the 80/24 rule, where you limit a method to no more than 24 lines and the width of each line to no more than 80 characters. The author also suggests the hex flower rule:

No more than seven things should be going on in a single piece of code.

The author defines abstraction as capturing the essence of an object, i.e.,

Abstraction is the elimination of the irrelevant and the amplification of the essential.

Another factor that influences decomposition is cohesion, so that code working on the same data structure, or on all of its attributes, is defined in the same module or class. The author cautions against feature envy; to decrease complexity, you may need to move the code to another method or class. The author refers to the "parse, don't validate" technique for validating an object, where the parsing method takes less-structured input and produces more-structured output, as sketched below. Next, the author describes fractal architecture, where a large system is decomposed into smaller chunks and each chunk hides details but can be zoomed in to see its structure. The fractal architecture helps organize the code so that lower-level details are captured in a single abstract chunk and can easily fit in your brain.
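To illustrate "parse, don't validate", the following hypothetical sketch parses a less-structured string into a more-structured EmailAddress value once, so downstream code never needs to re-validate it; the type and the simplistic parsing rule are assumptions for this example:

// Hypothetical example: parsing produces a structured EmailAddress value
// so later code can rely on the type instead of re-validating a String.
#[derive(Debug, Clone, PartialEq)]
pub struct EmailAddress {
    local: String,
    domain: String,
}

impl EmailAddress {
    pub fn parse(input: &str) -> Result<Self, String> {
        match input.split_once('@') {
            Some((local, domain)) if !local.is_empty() && domain.contains('.') => {
                Ok(EmailAddress { local: local.to_string(), domain: domain.to_string() })
            }
            _ => Err(format!("'{}' is not a valid email address", input)),
        }
    }
}

fn send_welcome(email: &EmailAddress) {
    // No validation needed here: the type guarantees well-formed input.
    println!("sending welcome mail to {}@{}", email.local, email.domain);
}

fn main() {
    if let Ok(email) = EmailAddress::parse("alice@example.com") {
        send_welcome(&email);
    }
}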

8. API Design

This chapter describes principles of API design such as affordance, which uses encapsulation to preserve the invariants of the objects involved in the API. Affordance allows a caller to invoke an API only when its preconditions are met. The author strengthens the notion of affordance with a poka-yoke (mistake-proofing) analogy, meaning a good interface design should be hard to misuse. Other techniques in the chapter include: write code for readers; favor well-named code over comments; and X out names. The X-out-names technique replaces an API name with xxx and checks whether a reader can still guess what the API does. For example, you may identify APIs for command-query separation, where a method shaped like void xxx() can be considered a command with a side effect, as sketched below. In order to communicate the intent of an API, the author describes a hierarchy of communication such as using an API's distinct types, helpful names, good comments, automated tests, helpful commit messages, and good documentation.
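As a small, hypothetical illustration of command-query separation, even with the names X'd out the signatures alone communicate intent: a &mut self method returning nothing is likely a command with a side effect, while a &self method returning a value is likely a query. The Inventory type below is made up for this sketch:

use std::collections::HashMap;

struct Inventory {
    stock: HashMap<String, u32>,
}

impl Inventory {
    // Command shape (void xxx(...) in the book's notation): takes &mut self,
    // returns nothing, and therefore mutates state as a side effect.
    fn restock(&mut self, sku: &str, quantity: u32) {
        *self.stock.entry(sku.to_string()).or_insert(0) += quantity;
    }

    // Query shape (xxx(...) -> value): takes &self and returns data,
    // so callers can assume it has no side effects.
    fn on_hand(&self, sku: &str) -> u32 {
        self.stock.get(sku).copied().unwrap_or(0)
    }
}

fn main() {
    let mut inventory = Inventory { stock: HashMap::new() };
    inventory.restock("book-1", 3); // command
    assert_eq!(inventory.on_hand("book-1"), 3); // query
}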

9. Teamwork

In this chapter, the author provides tips for teamwork and communication with other teammates, such as writing good Git commit messages using the 50/72 rule: first write a summary no wider than 50 characters, followed by a blank line, and then detailed text no wider than 72 characters. Other techniques include continuous integration, which generally uses a trunk or main branch for all commits, where developers make small changes, optionally behind feature flags, that are frequently merged. Developers are encouraged to make small commits, and code ownership is collective to decrease the bus factor. The author refers to pair programming and mob programming for collaboration within the team. In order to facilitate collaboration, the author suggests reducing code review latency and rejecting any large change set. Reviewers should ask whether they can maintain the code, whether the intent of the code is clear, and whether it could be further simplified. You can also pull down the code and test it locally to gain further insight.

10. Augmenting Code

This chapter focuses on refactoring existing code for adding new functionality, enhancing existing behavior, and fixing bugs. The author suggests using feature flags when deploying incomplete code. The author describes the strangler pattern for refactoring with incremental changes and suggests:

For any significant change, don’t make it in-place; make it side-by-side.

The strangler pattern can be applied at the method level, where you add a new method instead of making an in-place change to an existing method, and then remove the original method once callers have migrated, as sketched below. Similarly, you can use a class-level strangler to introduce a new data structure and then remove the old references. The author suggests using semantic versioning so that you can signal backward-compatible or breaking changes.
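Below is a minimal sketch of a method-level strangler, assuming a hypothetical ReportService: the new render_report method is added side by side, the old print_report method is kept (and marked deprecated) until callers have migrated, and it is deleted afterwards:

struct ReportService;

impl ReportService {
    // Old method kept temporarily so existing callers continue to work;
    // it is removed once nothing references it.
    #[deprecated(note = "use render_report instead")]
    fn print_report(&self, lines: &[String]) -> String {
        lines.join("\n")
    }

    // New method added side by side; callers migrate to it incrementally.
    fn render_report(&self, lines: &[String], separator: &str) -> String {
        lines.join(separator)
    }
}

fn main() {
    let service = ReportService;
    let lines = vec!["row 1".to_string(), "row 2".to_string()];
    println!("{}", service.render_report(&lines, "\n"));
}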

11. Editing Unit Tests

With an automated test suite you can refactor production code safely, but there is no safety net when making changes to the test code itself. You can add new tests, supplement existing tests with new assertions, or change unit tests into parameterized tests without affecting existing behavior, as sketched below. Some programmers follow a single assertion per test and consider multiple assertions an Assertion Roulette, but the author suggests strengthening the postconditions in unit tests with additional assertions, which is somewhat similar to the Liskov Substitution Principle, which says that subtypes may weaken preconditions and strengthen postconditions. The author suggests separating the refactoring of test and production code and using the IDE's supported refactoring tools such as rename, extract, or move method when possible.
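As an illustration, an example-based test can be rewritten as a table-driven (parameterized) test without changing production behavior; the is_leap_year function and its cases below are made up for this sketch and are not from the book:

// Hypothetical production code under test.
fn is_leap_year(year: u32) -> bool {
    (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn leap_year_cases() {
        // Each tuple is one parameterized case: (input, expected).
        let cases = [
            (2000, true),
            (1900, false),
            (2024, true),
            (2023, false),
        ];
        for (year, expected) in cases {
            assert_eq!(is_leap_year(year), expected, "year {}", year);
        }
    }
}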

12. Troubleshooting

When troubleshooting, you first have to understand what's going on. This chapter suggests using the scientific method: make a hypothesis, perform the experiment, and compare the outcome to the prediction. The author also suggests simplifying by removing code to check whether the problem goes away. Other ways to simplify the code include composing an object graph in code instead of using complex dependency injection; using pure functions instead of mock objects; merging often instead of using complex diff tools; and learning SQL instead of using a convoluted object-relational mapper. Another powerful technique for troubleshooting is rubber ducking, where you try to explain the problem and gain new insight in the process. In order to build quality in, you should aim to reduce defects to zero. Tests also help with troubleshooting: write an automated test to reproduce a defect before fixing it so that it serves as a regression test. The author cautions against slow tests and non-deterministic defects due to race conditions. Finally, the author suggests bisection, which uses a binary search to find the root cause: you try to reproduce the defect in half of the code and continue halving until you find the problem. You can also use Git's bisect feature to find the commit that introduced the defect.

13. Separation of Concerns

The author describes Kent Beck’s aphorism:

Things that change at the same rate belong together. Things that change at different rates belong apart.

The principle of separation of concerns can be used to decompose working software into smaller parts, which can be decomposed further with nested composition. The author suggests using the command-query separation principle to keep side effects separated from query operations. Object-oriented composition tends to focus on composing side effects together, such as in the Composite design pattern, which can lead to complex code. The author describes sequential composition, which chains methods together, and referential transparency, which defines deterministic methods without side effects. Next, the author describes cross-cutting concerns such as logging, performance monitoring, auditing, metering, instrumentation, caching, fault tolerance, and security. The author finally describes the Decorator pattern to enhance functionality; e.g., you can add logging to existing code without changing it and log actions from impure functions, as sketched below.
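Here is a minimal sketch of the Decorator pattern applied to logging as a cross-cutting concern; the Notifier trait and both implementations are hypothetical, and the decorator adds logging around the existing implementation without changing it:

trait Notifier {
    fn notify(&self, message: &str);
}

struct EmailNotifier;

impl Notifier for EmailNotifier {
    fn notify(&self, message: &str) {
        println!("sending email: {}", message);
    }
}

// Decorator: wraps any Notifier and logs the call before delegating.
struct LoggingNotifier<N: Notifier> {
    inner: N,
}

impl<N: Notifier> Notifier for LoggingNotifier<N> {
    fn notify(&self, message: &str) {
        println!("[log] notify called with '{}'", message);
        self.inner.notify(message);
    }
}

fn main() {
    let notifier = LoggingNotifier { inner: EmailNotifier };
    notifier.notify("book is overdue");
}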

14. Rhythm

This chapter describes daily and recurring practices that software development teams follow, such as daily stand-ups. The personal rhythm includes time-boxing or using the Pomodoro technique, taking breaks, using time deliberately, and touch typing. The team rhythm includes updating dependencies regularly and scheduling other recurring tasks such as checking certificates. The author quotes Conway's law:

Any organization that designs a system […] will inevitably produce a design whose structure is a copy of the organization’s communication structure.

You can use this law to organize the work that impacts the code base.

15. The Usual Suspects

This chapter covers the usual suspects of software engineering: architecture, algorithms, performance, security, and other concerns. For example, performance is often a critical aspect, but premature optimization can be wasteful; instead, correctness and an effort to reduce complexity and defects should be the priority. For security, the author suggests STRIDE threat modelling, which covers Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Other techniques include property-based testing, and behavioral code analysis can be used to extract information from Git history to identify patterns and problems.

16. Tour

In this last chapter, the author shares tips on understanding unfamiliar code by navigating to the main method and finding your way around from there. You can check whether the application uses broader patterns such as fractal architecture or Model-View-Controller, and understand authentication, authorization, routing, etc. The author offers a few suggestions about code structure and file organization, such as putting files in one directory, though that is contestable advice. The author refers to the hex flower and fractal architecture, where you can zoom in to see more details. When using a monolithic architecture, the entire production code compiles to a single executable file, which makes it harder to reuse parts of the code in new ways. Another drawback of a monolithic architecture is that dependencies can be hard to manage and abstractions can be coupled with implementations, which violates the Dependency Inversion Principle. Furthermore, to prevent cyclic dependencies, you will need to detect cycles and enforce the Acyclic Dependencies Principle. Finally, you can use the test suite to learn about the system.

Summary

The book is full of practical advice on writing maintainable code such as:

  • 50/72 Rule for Git commit messages
  • 80/24 Rule for writing small blocks of code
  • Tests based on Arrange-Act-Assert and Red-Green Refactor
  • Bisection for troubleshooting
  • Checklists for a new codebase
  • Command Query Separation
  • Cyclomatic Complexity and Counting the Variables
  • Decorators for cross-cutting concerns
  • Devil’s advocate for improving assertions
  • Feature flags
  • Functional core and imperative shell
  • Hierarchy of communication
  • Parse, don’t validate
  • Postel’s law to maintain invariants
  • Regularly update dependencies
  • Reproduce defects as Tests
  • Review code
  • Semantic Versioning
  • Separate refactoring of test and production code
  • Strangler pattern
  • Threat modeling using STRIDE
  • Transformation priority premise to make small changes while keeping the code in working condition
  • X-driven development by using unit-tests, static code analysis, etc.
  • X out Names

These heuristics help make the software development sustainable so that the team can make incremental changes to the code while maintaining high quality.

May 9, 2023

Applying Domain-Driven Design and Clean/Onion/Hexagonal Architecture to MicroServices

Filed under: Computing,Design — admin @ 8:41 pm

1. Abstract

In software design, modular design facilitates building large systems by decomposing functionality into independent modules, where each module defines an interface for the behavior it implements. Modular design evolved into component-based design that emphasized separation of concerns, and then into distributed systems, which gave rise to web services, service-oriented architectures, and event-driven architectures. This evolution led to the microservices architecture, in which each service defines a bounded context for the business domain of its functionality. Each service is autonomous, agile, loosely coupled, resilient, reliable, independently deployable, and scalable. This architecture encourages use of the abstraction, single-responsibility, DRY, dependency-inversion, common-closure, common-reuse, release-equivalence, and persistence-ignorance principles. Software development teams often use the Inverse Conway Maneuver to define clear ownership of each service, which improves developer velocity. As cloud computing gained wider adoption over the last 15 years, the microservice architecture was extended with the architecture of cloud-native applications (CNA), which offer the properties of Isolated State, Distribution, Elasticity, Automation, and Loose Coupling (IDEAL). The extended benefits of CNA and the microservice architecture include:

  • Fit for purpose
  • Rightsized and modular
  • Elasticity
  • Sovereign and tolerant
  • Resilient and protected
  • Controllable and adaptable
  • Workload-aware and resource-efficient
  • Agile and tool-supported
  • Observability including metric, tracing, and logging
  • Resilience
  • Availability
  • Independent, autonomous
  • Zero-Trust Security
  • Automation
  • Decentralized governance

2. Applying Domain-Driven Design

The following sections examine the primary concepts from domain-driven design:

2.1 Layers

Domain-driven design, introduced by Eric Evans, simplifies the architecture of microservices and builds upon a layered architecture such as:

  • presentation layer for user-interface.
  • application layer for use-cases that define the behavior.
  • domain layer for representing business rules and domain model.
  • infrastructure layer for data access and persistence for the domain objects based on Persistence Ignorance and Infrastructure Ignorance principles.
DDD Layers

2.2 Domain Model

The software development team and domain experts define the model using workshop-based Event Storming. The domain layer defines the following types of models:

  • Entity is a mutable object defined not by its attributes, but rather by a thread of continuity and identity.
  • Value object is an immutable object defined by its attributes instead of an identifier.
  • Domain events notify interested parties about data updates.

Domain-driven design recommends rich behavior in entity objects in addition to the data attributes and cautions against the AnemicDomainModel that only holds data attributes; a simplified sketch of a rich entity follows.
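As a hypothetical illustration (not the project's actual Book model described later), a rich entity keeps the behavior that protects its invariants next to its data instead of leaving that logic to external service code:

// Hypothetical entity: identity plus behavior, not just data attributes.
pub struct Book {
    id: String,
    available: bool,
}

impl Book {
    pub fn new(id: &str) -> Self {
        Book { id: id.to_string(), available: true }
    }

    // Behavior lives with the data and protects the invariant.
    pub fn checkout(&mut self) -> Result<(), String> {
        if !self.available {
            return Err(format!("book {} is not available", self.id));
        }
        self.available = false;
        Ok(())
    }
}

fn main() {
    let mut book = Book::new("book-1");
    assert!(book.checkout().is_ok());
    assert!(book.checkout().is_err()); // the invariant is enforced by the entity
}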

2.3 Aggregates

The entities and values can be clustered into aggregates that become a unit for retrieving and persisting data together. An aggregate root entity controls the lifecycle of, and access to, the objects inside its boundary.

2.4 Ubiquitous Language

The domain model employs Ubiquitous Language to bring domain experts and software development team together and eliminate inaccuracies, contradictions and confusion from the model.

2.5 Services

The services define high-level business logic that doesn’t fit within the domain objects. The services are generally designed as stateless with clearly defined interfaces.

2.6 Repositories

The repositories implement data persistence logic for retrieving and persisting aggregate and entity objects.

2.7 Factories

The factories help create complex objects, values and aggregates.

2.8 Bounded Context

The Bounded Context defines the boundaries of the domain model, which may consist of other sub-domains. This becomes the foundation for the boundaries of microservices, where each service is cohesive and loosely coupled, which avoids chatty communication between microservices.

2.9 Context Map

The context map helps define the boundaries of bounded contexts explicitly to prevent a Big Ball of Mud architecture, using the following patterns:

  • Shared Kernel shares a common domain model between teams.
  • Partnership with mutual dependency between teams.
  • Customer-Supplier defines an interface that supplier implements and consumer consumes it.
  • Open Host Service / Published Language relies on well documented or readily available information for integration.
  • Conformist where the downstream team conforms to the model of the upstream without any translation of models.
  • Anticorruption Layer isolates and abstracts the downstream’s models from external system’s models by translation.

3. Applying Hexagonal Architecture

The hexagonal architecture, or ports & adapters architecture, by Alistair Cockburn defines ports to receive incoming requests, which are then translated into internal messages or procedure calls by an adapter. Similarly, when the application needs to connect to external systems on the driven side, it sends a message through a port to an adapter. The ports and adapters architecture decouples the driver side and the driven side from the implementation technology.

Hexagonal Architecture

A port uses a protocol or an application programming interface (API) for communicating with the application, which is then translated by the adapter for internal consumption. When the application needs to connect to an external system such as a database, it goes through a similar port or interface, which is then translated into the underlying database protocol by the adapter. This architecture essentially applies the Dependency Inversion and Inversion of Control principles by depending only on the ports and decoupling external and internal components from the implementation technology. The application is the core of the system and defines the use-cases that can be triggered by a CLI or UI. The application layer internally contains commands, handlers, and services, which receive commands or queries from ports and communicate with external systems via ports and adapters. The application layer may trigger application events as the outcome of a use-case. The domain layer defines the domain model and domain-specific services, which are used by the application layer. The driver or primary side in the hexagonal architecture allows users to initiate communication with the application core, and the driven or secondary side is where the application core initiates communication with external dependent systems. A minimal sketch of a port and adapter follows.
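The following is a minimal sketch of a driven-side port and adapter in Rust; the NotificationPort trait, ConsoleNotificationAdapter, and OverdueNoticeUseCase names are hypothetical and not taken from the sample application. The application core depends only on the port, and the adapter translates it to a concrete technology:

// Port: defined by the application core, independent of any technology.
trait NotificationPort {
    fn send(&self, recipient: &str, body: &str) -> Result<(), String>;
}

// Adapter: translates the port to a concrete technology on the driven side.
struct ConsoleNotificationAdapter;

impl NotificationPort for ConsoleNotificationAdapter {
    fn send(&self, recipient: &str, body: &str) -> Result<(), String> {
        println!("to {}: {}", recipient, body);
        Ok(())
    }
}

// Application core: depends only on the port (Dependency Inversion).
struct OverdueNoticeUseCase<'a> {
    notifications: &'a dyn NotificationPort,
}

impl<'a> OverdueNoticeUseCase<'a> {
    fn remind(&self, patron: &str) -> Result<(), String> {
        self.notifications.send(patron, "your book is overdue")
    }
}

fn main() {
    let adapter = ConsoleNotificationAdapter;
    let use_case = OverdueNoticeUseCase { notifications: &adapter };
    use_case.remind("patron-1").unwrap();
}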

4. Applying Onion Architecture

The Onion Architecture defines concentric circles for layers, where all code can depend on layers closer to the center, but code cannot depend on layers further out from the core. The Domain Model represents the combination of state and behavior that models the truth for the organization. The number of layers in the application core will vary, but the domain model sits at the center. The repository interfaces for retrieving and persisting data surround the domain model and are defined in the application core. The Onion Architecture uses the Dependency Inversion principle to inject implementations for the interfaces defined in the application core.

Onion Architecture

5. Applying Clean Architecture

The Clean Architecture defines concentric circles to represent different areas of software and uses the dependency rule to point dependencies inwards.

Clean Architecture
  • The entities encapsulate business rules with behavior and data structure.
  • The use-cases encapsulate application-specific use-cases and orchestrate the flow of data to and from the entities.
  • The interface adapters convert data between the use cases/entities and external systems, and are used by presenters, views, and controllers.
  • The frameworks and drivers layer is composed of frameworks and tools such as the database and web framework.

The Clean Architecture uses the Dependency Inversion Principle to communicate across boundaries through interfaces, and an inner circle never depends on an outer circle.

6. Related Design Patterns

6.1 Model-driven architecture

Model-driven engineering and Model-driven architecture facilitate domain-driven design by generating source code, documentation, tests, etc. from the domain model.

6.2 Command Query Responsibility Segregation

Command Query Responsibility Segregation (CQRS) builds on Bertrand Meyer’s command-query separation principle and generalizes message-driven and event-driven architecture by segregating the behavior for querying data from the behavior for updating data.

6.3 Event sourcing

Event sourcing tracks internal state changes by committing events to an event store and rebuilding the state by replaying them, as sketched below.
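The following is a minimal, hypothetical sketch of event sourcing; the BookEvent enum and the in-memory EventStore are placeholders, and the point is that state is rebuilt by replaying committed events rather than stored directly:

// Hypothetical events for a book aggregate.
enum BookEvent {
    CheckedOut,
    Returned,
}

// A deliberately simple in-memory event store.
struct EventStore {
    events: Vec<BookEvent>,
}

impl EventStore {
    fn append(&mut self, event: BookEvent) {
        self.events.push(event);
    }

    // Rebuild the current state by replaying all committed events.
    fn is_checked_out(&self) -> bool {
        let mut checked_out = false;
        for event in &self.events {
            match event {
                BookEvent::CheckedOut => checked_out = true,
                BookEvent::Returned => checked_out = false,
            }
        }
        checked_out
    }
}

fn main() {
    let mut store = EventStore { events: Vec::new() };
    store.append(BookEvent::CheckedOut);
    store.append(BookEvent::Returned);
    assert!(!store.is_checked_out());
}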

7. Putting it all together

The following sections describe how a library management system can be built with domain-driven design and the hexagonal/clean architecture:

7.1 User Stories

Following is a list of primary user-stories that will be implemented for the library management system:

  • As a library administrator, I want to add a book to the collection so that patrons of the library may check it out.
  • As a library administrator, I want to remove a book from the collection so that it’s no longer available to borrow.
  • As a library administrator or a patron, I want to search books based on different criteria such as title, author, publisher, and dates so that I may see their details or check out a book later.
  • As a patron, I want to check out a book so that I can read it and return it later.
  • As a patron, I want to return a checked-out book when I am done reading it.
  • As a patron, I want to hold a book that is not currently available so that I can check it out later.
  • As a patron, I want to cancel a hold that I previously made when I am no longer interested in the book.
  • As a patron, I want to check out a book that I previously put on hold so that I can read it.

7.1.1 Constraints and Validation Policies

In addition, the library may impose certain policies and restrictions on books and on checkout/hold actions, such as allowing a restricted book to be held only by a researcher patron, or limiting the number of books that can be held or checked out at a time. A hypothetical sketch of such a policy check is shown below.
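The following is a hypothetical sketch of such a policy check; the MAX_HOLDS limit and the type names are placeholders, while the real application enforces similar rules inside its domain services (see section 7.5.3):

// Hypothetical policy check mirroring the constraints described above.
const MAX_HOLDS: usize = 10; // placeholder limit, not from the real application

struct Patron { is_researcher: bool, num_holds: usize }
struct Book { restricted: bool }

fn can_hold(patron: &Patron, book: &Book) -> Result<(), String> {
    if book.restricted && !patron.is_researcher {
        return Err("restricted books can only be held by researcher patrons".to_string());
    }
    if patron.num_holds >= MAX_HOLDS {
        return Err(format!("patron already has {} holds", patron.num_holds));
    }
    Ok(())
}

fn main() {
    let patron = Patron { is_researcher: false, num_holds: 0 };
    let book = Book { restricted: true };
    assert!(can_hold(&patron, &book).is_err());
}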

7.2 Layered Architecture

The library management application is divided into multiple domains such as patrons, catalog, checkout, and hold. Each domain is then divided into the following layers:

7.2.1 Application-Service and Controller Layer

This layer defines remote APIs for communicating with the microservices defined in the library management application.

7.2.2 Command and Query Layer (CQRS)

This layer defines operations for commands and queries that are invoked by the controller layer, which depend on underlying domain service layer. The command layer also defines the scope of a transaction so that all changes are persisted atomically.

Note: This layer may use SAGA pattern to handle distributed transactions when you need to invoke multiple services or databases for performing an operation.

7.2.3 Domain Service Layer

This layer defines additional business behavior that is built upon the domain model layer and is used by the commands and queries layer.

7.2.4 Domain Model Layer

This layer defines the data and behavior of the domain using entities, values, aggregates, factories, and interfaces to model the domain.

7.2.5 Infrastructure Layer

This layer defines repositories and gateways to persist domain entities and connect to external services such as messaging and logging.

7.3 Domain Model

The following domain model was defined as a result of the above user stories and an event-storming exercise:

Class Diagram

7.3.1 Party Pattern

The above design uses the party pattern to model patrons, library administrators, and library branches because they share many common attributes describing people and organizations. The base Party class uses the Address class to store a physical address, so the Party class acts as an aggregate for all related data about people and organizations. A simplified sketch follows.
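The following is a simplified, hypothetical sketch of the party pattern rather than the project's exact model; the common attributes for people and organizations live on Party, which also aggregates the Address:

// Simplified, hypothetical sketch of the party pattern.
struct Address {
    street: String,
    city: String,
    zip_code: String,
}

enum PartyKind {
    Patron,
    Administrator,
    Branch,
}

// Common attributes for people and organizations live on Party,
// which aggregates the related Address.
struct Party {
    party_id: String,
    kind: PartyKind,
    name: String,
    email: Option<String>,
    address: Option<Address>,
}

fn main() {
    let branch = Party {
        party_id: "branch-1".to_string(),
        kind: PartyKind::Branch,
        name: "Downtown Library".to_string(),
        email: None,
        address: Some(Address {
            street: "100 Main St".to_string(),
            city: "Seattle".to_string(),
            zip_code: "98101".to_string(),
        }),
    };
    println!("{} ({})", branch.name, branch.party_id);
}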

7.3.2 Book

The Book class models a library book that can be added to the library collection, queried by administrators or patrons, and then checked out or held by patrons.

7.3.3 Checkout

The Checkout class abstracts the data when checking out a book, which can be returned later.

7.3.4 Hold

The Hold class abstracts the data when holding a book that is not currently available so that it can be checked out later.

Note: Domain-driven design considers an anemic domain model without business behavior an antipattern, so the above domain model defines invariant business rules and behavior along with the data attributes.

7.4 Components and Modules

The library management application was divided into following modules:

Component Diagram

The core, utils, and gateway modules are shared by the other modules; the parties and books modules define low-level modules; and the catalog, patrons, checkout, and hold modules define high-level modules, which also act as bounded contexts: catalog for managing the books catalog, patrons for managing library members, and checkout/hold for defining the behavior of library operations.

7.4.1 core module

The core module abstracts common domain model, domain events and interfaces for command pattern, repository and controllers.

7.4.2 parties module

The parties module defines domain model for the party class and data access methods for persisting and querying parties (people and organizations).

7.4.3 books module

The books module defines domain model and data transfer model for books as well as repository for persisting and querying books using AWS DynamoDB.

7.4.4 patrons module

The patrons module is built upon the parties module and defines a service sub-module for the business logic to query and persist patrons. The patrons module also includes controller and command classes and a binary/main module to expose microservices based on AWS Lambda.

7.4.5 checkout module

The checkout module implements services for checking out and returning a book, which are then made available as microservices using the controller, command, and binary sub-modules.

7.4.6 hold module

The hold module implements services for holding a book and for canceling or checking out the hold later, which are then made available as microservices using the controller, command, and binary sub-modules.

7.4.7 gateway module

The gateway module defines interfaces to connect to external services such as AWS CloudWatch for managing metrics and AWS SNS for publishing events from the domain and user-action changes.

7.5 Domain-Driven Design Patterns

The library management system applies the following design patterns from domain-driven design and the hexagonal/clean architecture:

7.5.1 Ubiquitous Language

The domain model uses the same terminology as the stakeholders and experts from the underlying problem space, such as library patrons, books, checkout, hold, etc., so that the software development team can model the business problem as closely as possible.

7.5.2 Domain Events

The domain events capture data changes in the domain model, specifically in aggregate objects. This decouples domains in different bounded contexts, as other domains can listen to the domain events asynchronously and make local changes. The following is an example of a domain event in the library management system:

#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub(crate) struct DomainEvent {
    pub event_id: String,
    pub name: String,
    pub group: String,
    pub key: String,
    pub kind: DomainEventType,
    pub metadata: HashMap<String, String>,
    pub json_data: String,
    #[serde(with = "serializer")]
    pub created_at: NaiveDateTime,
}

7.5.3 Aggregates and Event Stream

The modules and domain model communicate with each other using event streams, which are built upon the AWS SNS service underneath, e.g.,

impl EventPublisher for SESPublisher {
    async fn publish(&self, event: &DomainEvent) -> Result<(), LibraryError> {
        let topic = self.topics.get(event.name.as_str());
        if let Some(arn) = topic {
            let json = serde_json::to_string(event)?;
            self.client.publish().topic_arn(arn).message(json).send().await?;
            Ok(())
        } else {
            Err(LibraryError::runtime(format!("topic is not found {}", event.name).as_str(), None))
        }
    }
}

Following example depicts publishing an event as a result of checking out a book:

    async fn checkout(&self, patron_id: &str, book_id: &str) -> LibraryResult<CheckoutDto> {
        let patron = self.patron_service.find_patron_by_id(patron_id).await?;
        let book = self.catalog_service.find_book_by_id(book_id).await?;
        if book.status() != BookStatus::Available {
            return Err(LibraryError::validation(format!("book is not available {}",
                                                        book.id()).as_str(), Some("400".to_string())));
        }
        if book.is_restricted() && patron.is_regular() {
            return Err(LibraryError::validation(format!("patron {} cannot hold restricted books {}",
                                                        patron.id(), book.id()).as_str(), Some("400".to_string())));
        }
        let checkout = CheckoutDto::from_patron_book(self.branch_id.as_str(), &patron, &book);
        self.checkout_repository.create(&CheckoutEntity::from(&checkout)).await?;
        let _ = self.events_publisher.publish(&DomainEvent::added(
            "book_checkout", "checkout", checkout.checkout_id.as_str(), &HashMap::new(), &checkout.clone())?).await?;
        Ok(checkout)
    }

The events_publisher publishes the domain events upon checking out a book. Similar events are published for other user-actions or domain changes.

7.5.4 Domain Services

The domain services define high-level business logic and each bounded context defines a layer for domain services such as:

pub(crate) trait CatalogService: Sync + Send {
    async fn add_book(&self, book: &BookDto) -> LibraryResult<BookDto>;
    async fn remove_book(&self, id: &str) -> LibraryResult<()>;
    async fn update_book(&self, book: &BookDto) -> LibraryResult<BookDto>;
    async fn find_book_by_id(&self, id: &str) -> LibraryResult<BookDto>;
    async fn find_book_by_isbn(&self, isbn: &str) -> LibraryResult<Vec<BookDto>>;
}
pub(crate) trait CheckoutService: Sync + Send {
    async fn checkout(&self, patron_id: &str, book_id: &str) -> LibraryResult<CheckoutDto>;
    async fn returned(&self, patron_id: &str, book_id: &str) -> LibraryResult<CheckoutDto>;
    async fn query_overdue(&self, predicate: &HashMap<String, String>,
                           page: Option<&str>, page_size: usize) -> LibraryResult<PaginatedResult<CheckoutDto>>;
}
pub(crate) trait HoldService: Sync + Send {
    async fn hold(&self, patron_id: &str, book_id: &str) -> LibraryResult<HoldDto>;
    async fn cancel(&self, patron_id: &str, book_id: &str) -> LibraryResult<HoldDto>;
    async fn checkout(&self, patron_id: &str, book_id: &str) -> LibraryResult<HoldDto>;
    async fn query_expired(&self, predicate: &HashMap<String, String>,
                           page: Option<&str>, page_size: usize) -> LibraryResult<PaginatedResult<HoldDto>>;
}

7.5.5 Repositories

The library management application uses the repository pattern to persist and query data, which can be backed by any supported implementation (such as DynamoDB). It also uses polymorphic associations for managing inheritance; e.g., the parties DynamoDB table can store patrons, administrators, and library branches. The repository implementation can be pointed to a local DynamoDB or the AWS-managed DynamoDB service, e.g.,

#[async_trait]
impl Repository<BookEntity> for DDBBookRepository {
    async fn create(&self, entity: &BookEntity) -> LibraryResult<usize> {
        let table_name: &str = self.table_name.as_ref();
        let val = serde_json::to_value(entity)?;
        self.client
            .put_item()
            .table_name(table_name)
            .condition_expression("attribute_not_exists(book_id)")
            .set_item(Some(parse_item(val)?))
            .send()
            .await.map(|_| 1).map_err(LibraryError::from)
    }

    async fn update(&self, entity: &BookEntity) -> LibraryResult<usize> {
        let now = Utc::now().naive_utc();
        let table_name: &str = self.table_name.as_ref();

        self.client
            .update_item()
            .table_name(table_name)
            .key("book_id", AttributeValue::S(entity.book_id.clone()))
            .update_expression("SET version = :version, title = :title, book_status = :book_status, dewey_decimal_id = :dewey_decimal_id, restricted = :restricted, updated_at = :updated_at")
            .expression_attribute_values(":old_version", AttributeValue::N(entity.version.to_string()))
            .expression_attribute_values(":version", AttributeValue::N((entity.version + 1).to_string()))
            .expression_attribute_values(":title", AttributeValue::S(entity.title.to_string()))
            .expression_attribute_values(":book_status", AttributeValue::S(entity.book_status.to_string()))
            .expression_attribute_values(":restricted", AttributeValue::Bool(entity.restricted))
            .expression_attribute_values(":dewey_decimal_id", AttributeValue::S(entity.dewey_decimal_id.to_string()))
            .expression_attribute_values(":updated_at", string_date(now))
            .condition_expression("attribute_exists(version) AND version = :old_version")
            .send()
            .await.map(|_| 1).map_err(LibraryError::from)
    }
  ...
}

7.5.6 Factories

The library management application uses factories to create instances of repositories, event publishers, and services based on different implementations, e.g.,

pub(crate) async fn create_checkout_repository(store: RepositoryStore) -> Box<dyn CheckoutRepository> {
    match store {
        RepositoryStore::DynamoDB => {
            let client = build_db_client(store).await;
            Box::new(DDBCheckoutRepository::new(client, "checkout", "checkout_ndx"))
        }
        RepositoryStore::LocalDynamoDB => {
            let client = build_db_client(store).await;
            let _ = create_table(&client, "checkout", "checkout_id", "checkout_status", "patron_id").await;
            Box::new(DDBCheckoutRepository::new(client, "checkout", "checkout_ndx"))
        }
    }
}

pub(crate) async fn create_checkout_service(
  config: &Configuration, store: RepositoryStore) -> Box<dyn CheckoutService> {
    let checkout_repo = factory::create_checkout_repository(store).await;
    let catalog_svc = create_catalog_service(config, store).await;
    let patron_svc = create_patron_service(config, store).await;
    let publisher = create_publisher(store.gateway_publisher()).await;
    Box::new(CheckoutServiceImpl::new(config, checkout_repo,
                                      patron_svc, catalog_svc, publisher))
}

7.5.7 Data Transfer Objects

The library management application uses immutable data transfer objects when invoking a business service, a command, or a method on a controller, so that these objects are free of side effects and can be safely shared with other modules in a concurrent environment, as sketched below.
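A minimal, hypothetical sketch of such an immutable DTO is shown below; the BookSummaryDto name and fields are placeholders. All fields are set at construction time and no mutating methods are exposed, so instances can be cloned and shared across threads:

// Hypothetical immutable DTO: constructed once, then only read or cloned.
#[derive(Debug, Clone)]
pub struct BookSummaryDto {
    pub book_id: String,
    pub title: String,
    pub available: bool,
}

impl BookSummaryDto {
    pub fn new(book_id: &str, title: &str, available: bool) -> Self {
        BookSummaryDto {
            book_id: book_id.to_string(),
            title: title.to_string(),
            available,
        }
    }
}

fn main() {
    let dto = BookSummaryDto::new("book-1", "my book", true);
    // Sharing across threads is safe because the DTO is never mutated.
    let handle = std::thread::spawn(move || println!("{:?}", dto));
    handle.join().unwrap();
}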

7.5.8 CQRS Pattern

The library management application uses the command-query separation principle to bridge the application services with the domain services. Each command handles a unique behavior implemented by the high-level modules for managing patrons, the book catalog, and the checkout/hold behavior, e.g.,

#[derive(Debug, Deserialize)]
pub(crate) struct CheckoutBookCommandRequest {
    patron_id: String,
    book_id: String,
}

#[derive(Debug, Serialize)]
pub(crate) struct CheckoutBookCommandResponse {
    checkout: CheckoutDto,
}

#[async_trait]
impl Command<CheckoutBookCommandRequest, CheckoutBookCommandResponse> for CheckoutBookCommand {
    async fn execute(&self, req: CheckoutBookCommandRequest) -> Result<CheckoutBookCommandResponse, CommandError> {
        self.checkout_service.checkout(req.patron_id.as_str(), req.book_id.as_str())
            .await.map_err(CommandError::from).map(CheckoutBookCommandResponse::new)
    }
}

7.5.9 Application Services/Controller

The application services/controller layer defines remote APIs, which are built on top of AWS Lambda APIs, e.g.,

pub(crate) async fn checkout_book(
    State(state): State<AppState>,
    json: Json<Value>) -> Result<Json<CheckoutBookCommandResponse>, ServerError> {
    let req: CheckoutBookCommandRequest = serde_json::from_value(json.0).map_err(json_to_server_error)?;
    let svc = build_service(state).await;
    let res = CheckoutBookCommand::new(svc).execute(req).await?;
    Ok(Json(res))
}

7.5.10 Bounded Context

As the Bounded Context defines the boundaries of the domain model, the library system defines bounded contexts for managing library members (patrons), managing books (catalog), and the checkout and hold operations. In addition, each domain may be decomposed into subdomains that reflect the business processes within the problem space.

7.5.11 Monads and Error Handling

The library management application uses the Result monad for returning results from a service, command, or controller so that the caller can handle errors properly. In addition, the Option monad is used for defining optional data properties so that the compiler can enforce all type checking, as illustrated below.
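The following small, hypothetical example (the functions and data are placeholders) illustrates combining Result and Option with the ? operator so that the compiler forces callers to handle both missing data and errors:

use std::collections::HashMap;

// Hypothetical lookup returning optional data.
fn find_due_date(checkouts: &HashMap<String, String>, checkout_id: &str) -> Option<String> {
    checkouts.get(checkout_id).cloned()
}

// Result propagates errors to the caller via the ? operator.
fn describe_due_date(checkouts: &HashMap<String, String>,
                     checkout_id: &str) -> Result<String, String> {
    let due = find_due_date(checkouts, checkout_id)
        .ok_or_else(|| format!("checkout {} not found", checkout_id))?;
    Ok(format!("checkout {} is due at {}", checkout_id, due))
}

fn main() {
    let mut checkouts = HashMap::new();
    checkouts.insert("c-1".to_string(), "2023-05-24".to_string());
    println!("{:?}", describe_due_date(&checkouts, "c-1"));     // Ok(...)
    println!("{:?}", describe_due_date(&checkouts, "missing")); // Err(...)
}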

7.5.12 Main

The high-level modules define a main module, which instantiates the API controllers for remote invocation. AWS Lambda requires that Rust-based Lambda functions are deployed as a binary executable, which is spawned by the Lambda runtime, e.g.,

#[tokio::main]
async fn main() -> Result<(), Error> {
    setup_tracing();

    let state = if DEV_MODE {
        std::env::set_var("AWS_LAMBDA_FUNCTION_NAME", "_");
        std::env::set_var("AWS_LAMBDA_FUNCTION_MEMORY_SIZE", "4096"); // 200MB
        std::env::set_var("AWS_LAMBDA_FUNCTION_VERSION", "1");
        std::env::set_var("AWS_LAMBDA_RUNTIME_API", "http://[::]:9000/.rt");
        AppState::new("dev", RepositoryStore::LocalDynamoDB)
    } else {
        AppState::new("prod", RepositoryStore::DynamoDB)
    };

    let app = Router::new()
        .route("/catalog", post(controller::add_book))
        .route("/catalog/:id",
               get(controller::find_book_by_id).delete(controller::remove_book))
        .with_state(state);

    run(app).await
}

7.6 Code structure

Following tree structure shows the module and code structure for the library management application:

|--- books
|   |--- domain
|   |   |--- model.rs
|   |--- domain.rs
|   |--- dto.rs
|   |--- factory.rs
|   |--- repository
|   |   |--- ddb_book_repository.rs
|   |--- repository.rs
|--- books.rs
|--- catalog
|   |--- bin
|   |   |--- main.rs
|   |--- command
|   |   |--- add_book_cmd.rs
|   |   |--- get_book_cmd.rs
|   |   |--- remove_book_cmd.rs
|   |   |--- update_book_cmd.rs
|   |--- command.rs
|   |--- controller.rs
|   |--- domain
|   |   |--- service.rs
|   |--- domain.rs
|   |--- dto.rs
|   |--- factory.rs
|--- catalog.rs
|--- checkout
|   |--- bin
|   |   |--- main.rs
|   |--- command
|   |   |--- checkout_book_cmd.rs
|   |   |--- return_book_cmd.rs
|   |--- command.rs
|   |--- controller.rs
|   |--- domain
|   |   |--- model.rs
|   |   |--- service.rs
|   |--- domain.rs
|   |--- dto.rs
|   |--- factory.rs
|   |--- repository
|   |   |--- ddb_checkout_repository.rs
|   |--- repository.rs
|--- checkout.rs
|--- core
|   |--- command.rs
|   |--- controller.rs
|   |--- domain.rs
|   |--- events.rs
|   |--- library.rs
|   |--- repository.rs
|--- core.rs
|--- gateway
|   |--- ddb
|   |   |--- publisher.rs
|   |--- ddb.rs
|   |--- events.rs
|   |--- factory.rs
|   |--- logs.rs
|   |--- sns
|   |   |--- publisher.rs
|   |--- sns.rs
|--- gateway.rs
|--- hold
|   |--- bin
|   |   |--- main.rs
|   |--- command
|   |   |--- cancel_hold_book_cmd.rs
|   |   |--- checkout_hold_book_cmd.rs
|   |   |--- hold_book_cmd.rs
|   |--- command.rs
|   |--- controller.rs
|   |--- domain
|   |   |--- model.rs
|   |   |--- service.rs
|   |--- domain.rs
|   |--- dto.rs
|   |--- events.rs
|   |--- factory.rs
|   |--- repository
|   |   |--- ddb_hold_repository.rs
|   |--- repository.rs
|--- hold.rs
|--- lib.rs
|--- library.rs
|--- main.rs
|--- parties
|   |--- domain
|   |   |--- model.rs
|   |--- domain.rs
|   |--- events.rs
|   |--- factory.rs
|   |--- repository
|   |   |--- ddb_party_repository.rs
|   |--- repository.rs
|--- parties.rs
|--- patrons
|   |--- bin
|   |   |--- main.rs
|   |--- command
|   |   |--- add_patron_cmd.rs
|   |   |--- get_patron_cmd.rs
|   |   |--- remove_patron_cmd.rs
|   |   |--- update_patron_cmd.rs
|   |--- command.rs
|   |--- controller.rs
|   |--- domain
|   |   |--- service.rs
|   |--- domain.rs
|   |--- dto.rs
|   |--- factory.rs
|--- patrons.rs
|--- utils
|   |--- date.rs
|   |--- ddb.rs
|--- utils.rs

7.7 Building and Testing

7.7.1 Start locally

docker-compose -f ddb-docker-compose.yaml up

7.7.2 Start Lambda locally

cargo lambda watch

7.7.3 Build

cargo build --release
cargo lambda build --release

7.7.4 Testing catalog Lambdas

Add a book:

curl -H "Content-Type: application/json" http://localhost:9000/catalog -d '{"isbn": "123", "title": "my book"}'

which would return something like:

{
  "book": {
    "dewey_decimal_id": "749",
    "book_id": "a2b25506-2948-47bb-9c4a-cf9ad480c10b",
    "version": 0,
    "author_id": "623a01ca-8ba9-41cd-b8b6-85a5711f8453",
    "publisher_id": "f0cff296-9f6e-4b25-95e1-a783661bf91f",
    "language": "en",
    "isbn": "123",
    "title": "my book",
    "book_status": "Available",
    "restricted": false,
    "published_at": "2023-05-09T20:55:56.073008+00:00",
    "created_at": "2023-05-09T20:55:56.073027+00:00",
    "updated_at": "2023-05-09T20:55:56.073027+00:00"
  }
}

Finding the book by id:

curl -H "Content-Type: application/json" http://localhost:9000/catalog/f58ef32a-6f24-4314-8782-c7ebcad0ab59

that returns

{
  "book": {
    "dewey_decimal_id": "220",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "version": 0,
    "author_id": "4c24b180-a146-410a-b68c-9d83c57adebc",
    "publisher_id": "88b47029-cad1-443f-8d67-aaf13863e924",
    "language": "en",
    "isbn": "123",
    "title": "my book",
    "book_status": "Available",
    "restricted": false,
    "published_at": "2023-05-09T22:18:25.436359+00:00",
    "created_at": "2023-05-09T22:18:25.436366+00:00",
    "updated_at": "2023-05-09T22:18:25.436371+00:00"
  }
}

7.7.5 Testing patrons Lambdas

Add a patron:

curl -v  -H "Content-Type: application/json" http://localhost:9000/patrons -d '{"email": "test-email@xyz.com"}'

that returns:

{
  "patron": {
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "version": 0,
    "first_name": "",
    "last_name": "",
    "email": "test-email@xyz.com",
    "under_13": false,
    "group_roles": [],
    "num_holds": 0,
    "num_overdue": 0,
    "home_phone": null,
    "cell_phone": null,
    "work_phone": null,
    "street_address": null,
    "city": null,
    "zip_code": null,
    "state": null,
    "country": null,
    "created_at": "2023-05-09T22:20:28.898831",
    "updated_at": "2023-05-09T22:20:28.898833"
  }
}

Getting patron:

curl -H "Content-Type: application/json" http://localhost:9000/patrons/cf49007e-e7fa-42c3-ac56-e15b9530597e|jq '.'

that returns:

{
  "patron": {
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "version": 0,
    "first_name": "",
    "last_name": "",
    "email": "test-email@xyz.com",
    "under_13": false,
    "group_roles": [],
    "num_holds": 0,
    "num_overdue": 0,
    "home_phone": "",
    "cell_phone": "",
    "work_phone": "",
    "street_address": null,
    "city": null,
    "zip_code": null,
    "state": null,
    "country": null,
    "created_at": "2023-05-09T22:21:35.142750",
    "updated_at": "2023-05-09T22:21:35.142757"
  }
}

7.7.6 Checkout book Lambda

Checkout a book:

curl -v  -H "Content-Type: application/json" http://localhost:9000/checkout -d '{"patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e", "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59"}'

that returns:

{
  "checkout": {
    "checkout_id": "4a7ea5c5-939d-4934-8715-071c7ab5bc71",
    "version": 0,
    "branch_id": "dev",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "checkout_status": "CheckedOut",
    "checkout_at": "2023-05-09T22:36:55.162807+00:00",
    "due_at": "2023-05-24T22:36:55.162808+00:00",
    "returned_at": null,
    "created_at": "2023-05-09T22:36:55.162812+00:00",
    "updated_at": "2023-05-09T22:36:55.162812+00:00"
  }
}

Returning a book:

curl -v  -H "Content-Type: application/json" http://localhost:9000/checkout/return -d '{"patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e", "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59"}'

that returns:

{
  "checkout": {
    "checkout_id": "6b432212-8136-45a5-a8c4-953da73ee24f",
    "version": 0,
    "branch_id": "dev",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "checkout_status": "Returned",
    "checkout_at": "1970-01-01T00:00:00+00:00",
    "due_at": "2023-05-09T22:36:59.145408+00:00",
    "returned_at": "2023-05-09T22:36:59.145607",
    "created_at": "2023-05-09T22:36:59.145415+00:00",
    "updated_at": "2023-05-09T22:36:59.145421+00:00"
  }
}

7.7.7 Hold book Lambda

Hold a book:

curl -v  -H "Content-Type: application/json" http://localhost:9000/hold -d '{"patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e", "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59"}'

that returns

{
  "hold": {
    "hold_id": "b6cbff12-fe0b-4be0-9566-5e221e52c8c5",
    "version": 0,
    "branch_id": "dev",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "hold_status": "OnHold",
    "hold_at": "2023-05-09T22:38:52.905822+00:00",
    "expires_at": "2023-05-24T22:38:52.905822+00:00",
    "canceled_at": null,
    "checked_out_at": null,
    "created_at": "2023-05-09T22:38:52.905825+00:00",
    "updated_at": "2023-05-09T22:38:52.905825+00:00"
  }
}

Canceling a hold:

curl -v  -H "Content-Type: application/json" http://localhost:9000/hold/cancel -d '{"patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e", "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59"}'
that returns:
{
  "hold": {
    "hold_id": "b6cbff12-fe0b-4be0-9566-5e221e52c8c5",
    "version": 0,
    "branch_id": "dev",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "hold_status": "Canceled",
    "hold_at": "2023-05-09T22:39:51.920045+00:00",
    "expires_at": "2023-05-09T22:39:51.920052+00:00",
    "canceled_at": "2023-05-09T22:39:51.920078",
    "checked_out_at": null,
    "created_at": "2023-05-09T22:39:51.920058+00:00",
    "updated_at": "2023-05-09T22:39:51.920063+00:00"
  }
}

Checking out a hold book:

curl -v  -H "Content-Type: application/json" http://localhost:9000/hold/checkout -d '{"patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e", "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59"}'

that returns:

{
  "hold": {
    "hold_id": "f5fdb835-5ea2-428d-af12-a81ffb1b3f35",
    "version": 0,
    "branch_id": "dev",
    "book_id": "f58ef32a-6f24-4314-8782-c7ebcad0ab59",
    "patron_id": "cf49007e-e7fa-42c3-ac56-e15b9530597e",
    "hold_status": "CheckedOut",
    "hold_at": "2023-05-09T22:40:54.705417+00:00",
    "expires_at": "2023-05-09T22:40:54.705424+00:00",
    "canceled_at": null,
    "checked_out_at": "2023-05-09T22:40:54.705443",
    "created_at": "2023-05-09T22:40:54.705430+00:00",
    "updated_at": "2023-05-09T22:40:54.705435+00:00"
  }
}

8. Deployment and Infrastructure as Code

In order to fully automate deployment, the library management system uses AWS CDK to provision DynamoDB tables, CloudWatch resources, and AWS Lambda functions along with the related security policies. You can deploy the infrastructure as follows:

8.1 Install CDK

npm install -g typescript
npm install aws-cdk-lib
npm install -g aws-cdk

8.2 Deploy

cd cdk
cdk deploy

8.3 Destroy

If you need to remove all infrastructure, simply run:

cd cdk
cdk destroy

9. Summary

A sample library management system demonstrates how to apply domain-driven design and the hexagonal/clean architecture to build microservices. It is implemented in Rust and uses AWS DynamoDB, AWS SNS, AWS CloudWatch, and AWS Lambda to build modern microservices. The sample domain-driven application also uses AWS CDK to manage infrastructure as code so that you can deploy the services consistently across all environments. You can download the sample application from https://github.com/bhatti/ddd-sample-microservice.

PS: The library management system is a sample application to showcase the domain-driven and hexagonal/clean architecture, but you can read Building a Secured Family-friendly Password Manager and Building a Hybrid Authorization System for Granular Access Control to learn these concepts on somewhat larger open-source applications available at https://github.com/bhatti/PlexPass and https://github.com/bhatti/PlexAuthZ.

March 26, 2023

Elegant Implementation Patterns

Filed under: Computing,Languages — Tags: — admin @ 7:47 pm

Patterns are typical solutions to common problems in various phases of the software development lifecycle, and many books and resources cover various types of patterns.

However, you may also find low-level implementation patterns that developers often apply to common coding problems. Following are a few of these coding patterns that I have found particularly fascinating and pragmatic in my experience:

Functional Options Pattern

The options pattern is used to pass configuration options to a method; e.g., you can pass a struct that holds configuration settings. These configuration properties may have default values that are used when they are not explicitly defined. You may implement the options pattern using the builder pattern to initialize config properties, but it requires explicitly building the configuration options even when there is nothing to override. In addition, error handling with the builder pattern poses more complexity when chaining methods. The functional options pattern, on the other hand, defines each configuration option as a function (see Functional Options in Go and 100 Go Mistakes and How to Avoid Them), which can validate the configuration option and return an error for invalid data. For example:

type options struct {
  port *int
  timeout *time.Duration
}
type Option func(opts *options) error

func WithPort(port int) Option {
  return func(opts *options) error {
    if port < 0 {
      return errors.New("port shold be positive")
    }
    opts.port = &port
    return nil
  }
}

func NewServer(addr string, opts ...Option) (*http.Server, error) {
  var options options
  for _, opt := range opts {
    err := opt(&options)
    if err != nil {
      return nil, err
    }
  }
  var port int
  if options.port == nil {
    port = defaultHTTPPort
  } else {
    if *options.port == 0 {
      port = randomPort()
    } else {
      port = *options.port
    }
  }
  ...
}

You can then pass configuration options as follows:

server, err := NewServer(
  WithPort(8080),
  WithTimeout(time.Second),
)

The above solution allows handling errors when overriding default values fails, as well as passing an empty list of options. In other languages, where errors can be implicitly propagated to the calling code, you may still use the builder pattern for configuration, such as:

const DefaultHttpPort: i32 = 8080;

#[derive(Debug, PartialEq)]
struct Options {
   port: i32,
   timeout: i32,
}

#[derive(Debug)]
pub enum OptionsError {
    Validation(String),
}

struct OptionsBuilder {
   port: Option<i32>,
   timeout: Option<i32>,
}

impl OptionsBuilder {
    fn new() -> Self {
        OptionsBuilder {
            port: Some(DefaultHttpPort),
            timeout: Some(1000),
        }
    }

    pub fn with_port(mut self, port: i32) -> Result<Self, OptionsError> {
        if port < 0 {
            return Err(OptionsError::Validation("port shold be positive".to_string()));
        }
        if port == 0 {
           self.port = Some(random_port());
        } else {
           self.port = Some(port);
        }
        Ok(self)
    }

    pub fn with_timeout(mut self, timeout: i32) -> Result<Self, OptionsError> {
        if timeout <= 0 {
            return Err(OptionsError::Validation("timeout shold be positive".to_string()));
        }
        self.timeout = Some(timeout);
        Ok(self)
    }

    pub fn build(self) -> Options {
        Options { port: self.port.unwrap(), timeout: self.timeout.unwrap() }
    }
}

fn new_server(addr: &str, opts: Options) -> Result<Server, OptionsError> {
    Ok(Server::new(addr, opts.port, opts.timeout))
}

fn main() -> Result<(), OptionsError> {
    let _ = new_server("127.0.0.1", OptionsBuilder::new().build());
    let _ = new_server("127.0.0.1", OptionsBuilder::new().with_port(8000)?.with_timeout(2000)?.build());
    Ok(())
}

However, the above solution still requires building the config options even when no properties are overridden.

State Pattern with Enum

The state pattern is one of the GoF design patterns and is used to implement finite-state machines or the strategy pattern. The state pattern can be easily implemented using "sum" (alternative) algebraic data types with union or enum constructs. For example, here is an implementation of the state pattern in Rust:

use std::{error::Error, fmt};

#[derive(Debug)]
struct JobError {
    reason: String,
}

impl Error for JobError {}

impl fmt::Display for JobError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "state error {}", self.reason)
    }
}

enum JobState {
    Pending { timeout: std::time::Duration },
    Executing { percentage_completed: f32 },
    Completed { completion_time: std::time::Duration },
    Failed { cause: JobError },
}

struct JobStateMachine {
    state: JobState,
}

impl JobStateMachine {
    fn new(timeout: std::time::Duration) -> Self {
        JobStateMachine {
            state: JobState::Pending { timeout }
        }
    }
    fn to_executing(&mut self) {
        self.state = match self.state {
            JobState::Pending { .. } => JobState::Executing { percentage_completed: 0.0 },
            _ => panic!("Invalid state transition!"),
        }
    }
    fn to_succeeded(&mut self, completion_time: std::time::Duration) {
        self.state = match self.state {
            JobState::Executing { .. } => JobState::Completed { completion_time: completion_time },
            _ => panic!("Invalid state transition!"),
        }
    }
    // ...
}

fn main() {
    let mut job_state_machine = JobStateMachine::new(std::time::Duration::new(1000, 0));
    job_state_machine.to_executing();
}

However, the above implementation relies on runtime checks to validate state transitions. Alternatively, you can use a separate struct for each state so that valid transitions are checked at compile time, e.g.,

struct Pending {
    timeout: std::time::Duration,
}

impl Pending {
    fn new(timeout: std::time::Duration) -> Self {
        Pending { timeout }
    }

    fn to_executing(self) -> Executing {
        Executing::new()
    }
}

struct Executing {
    percentage_completed: f32,
}

impl Executing {
    fn new() -> Self {
        Executing { percentage_completed: 0.0 }
    }

    fn to_succeeded(self, completion_time: std::time::Duration) -> Succeeded {
        Succeeded::new(completion_time)
    }
}

struct Succeeded {
    completion_time: std::time::Duration,
}

impl Succeeded {
    fn new(completion_time: std::time::Duration) -> Self {
        Succeeded { completion_time }
    }
}

// ...

fn main() {
    let pending = Pending::new(std::time::Duration::new(1000, 0));
    let executing = pending.to_executing();
}

Tail Recursion with Trampolines and Thunks

Recursion uses divide and conquer to solve complex problems, where a function calls itself to break a problem down into smaller subproblems. However, each recursive call adds a stack frame to the call stack, so many functional languages convert a recursive implementation into an iterative one by eliminating the tail call, i.e., when the recursive call is the final action of a function. In languages that don’t support tail-call optimization, you can use thunks and trampolines to achieve the same effect. A thunk is a no-argument function that is evaluated lazily and may in turn produce another thunk for the next function call. A trampoline defines a Computation data structure that holds either the final result or the next thunk to evaluate. For example, the following code illustrates an implementation of a trampoline in Rust:

trait FnThunk {
    type Out;
    fn call(self: Box<Self>) -> Self::Out;
}

pub struct Thunk<'a, T> {
    fun: Box<dyn FnThunk<Out=T> + 'a>,
}

impl<T, F> FnThunk for F where F: FnOnce() -> T {
    type Out = T;
    fn call(self: Box<Self>) -> T { (*self)() }
}

impl<'a, T> Thunk<'a, T> {
    pub fn new(fun: impl FnOnce() -> T + 'a) -> Self {
        Self { fun: Box::new(fun) }
    }
    pub fn compute(self) -> T {
        self.fun.call()
    }
}

pub enum Computation<'a, T> {
    Done(T),
    Call(Thunk<'a, Computation<'a, T>>),
}

pub fn compute<T>(mut res: Computation<T>) -> T {
    loop {
        match res {
            Computation::Done(x) => break x,
            Computation::Call(thunk) => res = thunk.compute(),
        }
    }
}

fn factorial(n: u128) -> u128 {
    fn fac_with_acc(n: u128, acc: u128) -> Computation<'static, u128> {
        if n > 1 {
            Computation::Call(Thunk::new(move || fac_with_acc(n-1, acc * n)))
        } else {
            Computation::Done(acc)
        }
    }
    compute(fac_with_acc(n, 1))
}

fn main() {
    println!("factorial result {}", factorial(5));
}
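
As an additional sketch that is not part of the original example, the same Computation, Thunk, and compute definitions can evaluate recursion depths that would overflow the call stack if written as plain recursion, because the trampoline turns each recursive step into an iteration:

// A sketch reusing the Computation, Thunk, and compute definitions above.
fn sum(n: u64) -> u64 {
    fn sum_with_acc(n: u64, acc: u64) -> Computation<'static, u64> {
        if n > 0 {
            // Return a thunk for the next step instead of recursing directly,
            // so no additional stack frame is pushed per step.
            Computation::Call(Thunk::new(move || sum_with_acc(n - 1, acc + n)))
        } else {
            Computation::Done(acc)
        }
    }
    compute(sum_with_acc(n, 0))
}

// e.g., sum(1_000_000) evaluates a million steps without growing the call stack.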

Memoization

Memoization allows caching the results of expensive function calls so that repeated invocations with the same input return the cached result instead of recomputing it. It can be implemented using the thunk pattern described above. For example, the following code shows a Rust-based implementation:

use std::borrow::Borrow;
use std::marker::PhantomData;
use std::ops::{Deref, DerefMut};


enum Memoized<I: 'static, O: Clone, Func: Fn(I) -> O> {
    UnInitialized(PhantomData<&'static I>, Box<Func>),
    Processed(O),
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Memoized<I, O, Func> {
    fn new(lambda: Func) -> Memoized<I, O, Func> {
        Memoized::UnInitialized(PhantomData, Box::new(lambda))
    }
    fn fetch(&mut self, data: I) -> O {
        let (flag, val) = match self {
            &mut Memoized::Processed(ref x) => (false, x.clone()),
            &mut Memoized::UnInitialized(_, ref z) => (true, z(data))
        };
        if flag {
            *self = Memoized::Processed(val.clone());
        }
        val
    }
    fn is_initialized(&self) -> bool {
        match self {
            &Memoized::Processed(_) => true,
            _ => false
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Deref for Memoized<I, O, Func> {
    type Target = O;
    fn deref(&self) -> &Self::Target {
        match self {
            &Memoized::Processed(ref x) => x,
            _ => panic!("Attempted to derefence uninitalized memoized value")
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> DerefMut for Memoized<I, O, Func> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        //self.get()
        if self.is_initialized() {
            match self {
                &mut Memoized::Processed(ref mut x) => return x,
                _ => unreachable!()
            };
        } else {
            *self = Memoized::Processed(unsafe { std::mem::zeroed() });
            match self {
                &mut Memoized::Processed(ref mut x) => return x,
                _ => unreachable!()
            };
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Borrow<O> for Memoized<I, O, Func> {
    fn borrow(&self) -> &O {
        match self {
            &Memoized::Processed(ref x) => x,
            _ => panic!("Attempted to borrow uninitalized memoized value")
        }
    }
}


enum Memoized<I: 'static, O: Clone, Func: Fn(I) -> O> {
    UnInitialized(PhantomData<&'static I>, Box<Func>),
    Processed(O),
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Memoized<I, O, Func> {
    fn new(lambda: Func) -> Memoized<I, O, Func> {
        Memoized::UnInitialized(PhantomData, Box::new(lambda))
    }
    fn fetch(&mut self, data: I) -> O {
        let (flag, val) = match self {
            &mut Memoized::Processed(ref x) => (false, x.clone()),
            &mut Memoized::UnInitialized(_, ref z) => (true, z(data))
        };
        if flag {
            *self = Memoized::Processed(val.clone());
        }
        val
    }
    fn is_initialized(&self) -> bool {
        match self {
            &Memoized::Processed(_) => true,
            _ => false
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Deref for Memoized<I, O, Func> {
    type Target = O;
    fn deref(&self) -> &Self::Target {
        match self {
            &Memoized::Processed(ref x) => x,
            _ => panic!("Attempted to derefence uninitalized memoized value")
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> DerefMut for Memoized<I, O, Func> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        //self.get()
        if self.is_initialized() {
            match self {
                &mut Memoized::Processed(ref mut x) => return x,
                _ => unreachable!()
            };
        } else {
            *self = Memoized::Processed(unsafe { std::mem::zeroed() });
            match self {
                &mut Memoized::Processed(ref mut x) => return x,
                _ => unreachable!()
            };
        }
    }
}

impl<I: 'static, O: Clone, Func: Fn(I) -> O> Borrow<O> for Memoized<I, O, Func> {
    fn borrow(&self) -> &O {
        match self {
            &Memoized::Processed(ref x) => x,
            _ => panic!("Attempted to borrow uninitalized memoized value")
        }
    }
}


#[cfg(test)]
mod test {
    use super::Memoized;

    #[test]
    fn test_memoized() {
        let lambda = |x: i32| -> String {
            x.to_string()
        };
        let mut dut = Memoized::new(lambda);
        assert_eq!(dut.is_initialized(), false);
        assert_eq!(&dut.fetch(5), "5");
        assert_eq!(dut.is_initialized(), true);
        assert_eq!(&dut.fetch(2000), "5");
        let x: &str = &dut;
        assert_eq!(x, "5");
    }
}
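
Note that the implementation above effectively caches a single lazily computed value, which is why fetch(2000) in the test still returns “5”. As a minimal sketch that is not part of the original, a per-input memoizer can instead keep a HashMap keyed by the argument:

use std::collections::HashMap;
use std::hash::Hash;

// Caches one result per distinct input value.
struct MemoizedMap<I: Eq + Hash, O: Clone, F: Fn(&I) -> O> {
    cache: HashMap<I, O>,
    func: F,
}

impl<I: Eq + Hash, O: Clone, F: Fn(&I) -> O> MemoizedMap<I, O, F> {
    fn new(func: F) -> Self {
        MemoizedMap { cache: HashMap::new(), func }
    }

    // Returns the cached result for `input`, computing and storing it on the first call.
    fn fetch(&mut self, input: I) -> O {
        if let Some(found) = self.cache.get(&input) {
            return found.clone();
        }
        let computed = (self.func)(&input);
        self.cache.insert(input, computed.clone());
        computed
    }
}

fn main() {
    let mut memo: MemoizedMap<i32, String, _> = MemoizedMap::new(|x: &i32| x.to_string());
    assert_eq!(memo.fetch(5), "5");
    assert_eq!(memo.fetch(2000), "2000"); // a different input gets its own cached result
    assert_eq!(memo.fetch(5), "5");       // a repeated input is served from the cache
}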

Type Conversion

Type conversion allows converting an object from one type to another; e.g., the following interface defined in the Spring framework shows an example:

public interface Converter<S, T> {
	@Nullable
	T convert(S source);
	default <U> Converter<S, U> andThen(Converter<? super T, ? extends U> after) {
		Assert.notNull(after, "'after' Converter must not be null");
		return (S s) -> {
			T initialResult = convert(s);
			return (initialResult != null ? after.convert(initialResult) : null);
		};
	}
}

This kind of type conversion looks very similar to the map/reduce primitives defined in functional programming languages; e.g., Java 8 added the Function interface for such transformations. In addition, Scala supports implicit conversion from one type to another, e.g.,

object Conversions:
  given fromStringToUser: Conversion[String, User] = (name: String) => User(name)

Rust also supports the From and Into traits for converting between types, e.g.,

use std::convert::From;

#[derive(Debug)]
struct Number {
    value: i32,
}

impl From<i32> for Number {
    fn from(item: i32) -> Self {
        Number { value: item }
    }
}

fn main() {
    let num1: Number = Number::from(10);
    let num2: Number = 20.into();
    
    println!("{:?} {:?}", num1, num2);
}