Shahzad Bhatti Welcome to my ramblings and rants!

August 26, 2023

Modern Software Development Process

Filed under: Methodologies,Project Management,Technology — admin @ 5:20 pm

The technologies, methodologies, and paradigms that have shaped the way software is designed, built, and delivered have evolved from the Waterfall model, proposed in 1970 by Dr. Winston W. Royce, to the Agile model in the early 2000s. The Waterfall model was characterized by sequential and rigid phases, detailed documentation, and milestone-based progression, and it suffered from inflexibility to adapt to changes, longer time-to-market, high risk and uncertainty, difficulty in accurately estimating costs and time, silos between teams, and late discovery of issues. The Agile model addresses these shortcomings by adopting iterative and incremental development, continuous delivery, feedback loops, lower risk, and cross-functional teams. In addition, modern software development processes over the last few years have adopted DevOps integration, automated testing, microservices and modular architecture, cloud services, infrastructure as code, and data-driven decisions to improve time to market, customer satisfaction, cost efficiency, operational resilience, collaboration, and sustainable development.

Modern Software Development Process

1. Goals

The goals of the modern software development process include:

  1. Speed and Agility: Accelerate the delivery of high-quality software products and updates to respond to market demands and user needs.
  2. Quality and Reliability: Ensure that the software meets specified requirements, performs reliably under all conditions, and is maintainable and scalable.
  3. Reduced Complexity: Simplify complex tasks with use of refactoring and microservices.
  4. Customer-Centricity: Develop features and products that solve real problems for users and meet customer expectations for usability and experience.
  5. Scalability: Design software that can grow and adapt to changing user needs and technological advancements.
  6. Cost-Efficiency: Optimize resource utilization and operational costs through automation and efficient practices.
  7. Security: Ensure the integrity, availability, and confidentiality of data and services.
  8. Innovation: Encourage a culture of innovation and learning, allowing teams to experiment, iterate, and adapt.
  9. Collaboration and Transparency: Foster a collaborative environment among cross-functional teams and maintain transparency with stakeholders.
  10. Employee Satisfaction: A focus on collaboration, autonomy, and mastery leads to more satisfied, motivated teams.
  11. Learning and Growth: Allow teams to reflect on their wins and losses regularly, fostering a culture of continuous improvement.
  12. Transparency: Promote transparency with regular meetings like daily stand-ups and sprint reviews.
  13. Flexibility: Adapt to changes even late in the development process.
  14. Compliance and Governance: Ensure that software meets industry-specific regulations and standards.
  15. Risk Mitigation: Make it easier to course-correct, reducing the risk of late project failure.
  16. Competitive Advantage: Enable companies to adapt to market changes more swiftly, offering a competitive edge.

2. Elements of Software Development Process

Key elements of the Software Development Cycle include:

  1. Discovery & Ideation: Involves brainstorming, feasibility studies, and stakeholder buy-in. Lean startup methodologies might be applied for new products.
  2. Continuous Planning: Agile roadmaps, iterative sprint planning, and backlog refinement are continuous activities.
  3. User-Centric Design: The design process is iterative, human-centered, and closely aligned with development.
  4. Development & Testing: Emphasizes automated testing, code reviews, and incremental development. CI/CD pipelines automate much of the build and testing process.
  5. Deployment & Monitoring: DevOps practices facilitate automated, reliable deployments. Real-time monitoring tools and AIOps (Artificial Intelligence for IT Operations) can proactively manage system health.
  6. Iterative Feedback Loops: Customer feedback, analytics, and KPIs inform future development cycles.

3. Roles in Modern Processes

Critical Roles in Contemporary Software Development include:

  1. Product Owner/Product Manager: This role serves as the liaison between the business and technical teams. They are responsible for defining the product roadmap, prioritizing features, and ensuring that the project aligns with stakeholder needs and business objectives.
  2. Scrum Master/Agile Coach: These professionals are responsible for facilitating Agile methodologies within the team. They help organize and run sprint planning sessions, stand-ups, and retrospectives.
  3. Development Team: This group is made up of software engineers and developers responsible for the design, coding, and initial testing of the product. They collaborate closely with other roles, especially the Product Owner, to ensure that the delivered features meet the defined requirements.
  4. DevOps Engineers: DevOps Engineers act as the bridge between the development and operations teams. They focus on automating the Continuous Integration/Continuous Deployment (CI/CD) pipeline to ensure that code can be safely, efficiently, and reliably moved from development into production.
  5. QA Engineers: Quality Assurance Engineers play a vital role in the software development life cycle by creating automated tests that are integrated into the CI/CD pipeline.
  6. Data Scientists: These individuals use analytical techniques to draw actionable insights from data generated by the application. They may look at user behavior, application performance, or other metrics to provide valuable feedback that can influence future development cycles or business decisions.
  7. Security Engineers: Security Engineers are tasked with ensuring that the application is secure from various types of threats. They are involved from the early stages of development to ensure that security is integrated into the design (“Security by Design”).

Each of these roles plays a critical part in modern software development, contributing to more efficient processes, higher-quality output, and ultimately, greater business value.

4. Phases of Modern SDLC

In the Agile approach to the software development lifecycle, multiple phases are organized not in a rigid, sequential manner as seen in the Waterfall model, but as part of a more fluid, iterative process. This allows for continuous feedback loops and enables incremental delivery of features. Below is an overview of the various phases that constitute the modern Agile-based software development lifecycle:

4.1 Inception Phase

The Inception phase is the initial stage in a software development project, often seen in frameworks like the Rational Unified Process (RUP) or even in Agile methodologies in a less formal manner. It sets the foundation for the entire project by defining its scope, goals, constraints, and overall vision.

4.1.1 Objectives

The objectives of the inception phase include:

  1. Project Vision and Scope: Define what the project is about, its boundaries, and what it aims to achieve.
  2. Feasibility Study: Assess whether the project is feasible in terms of time, budget, and technical constraints.
  3. Stakeholder Identification: Identify all the people or organizations who have an interest in the project, such as clients, end-users, developers, and more.
  4. Risk Assessment: Evaluate potential risks, like technical challenges, resource limitations, and market risks, and consider how they can be mitigated.
  5. Resource Planning: Preliminary estimation of the resources (time, human capital, budget) required for the project.
  6. Initial Requirements Gathering: Collect high-level requirements, often expressed as epics or user stories, which will be refined later.
  7. Project Roadmap: Create a high-level project roadmap that outlines major milestones and timelines.

4.1.2 Practices

The inception phase includes the following practices:

  • Epics Creation: High-level functionalities and features are described as epics.
  • Feasibility Study: Analyze the technical and financial feasibility of the proposed project.
  • Requirements: Identify customers and document the requirements captured from them. Review requirements with all stakeholders and get alignment on key features and deliverables. Watch out for scope creep so that software changes can be delivered incrementally.
  • Estimated Effort: A high-level estimate of the effort, in terms of man-months, t-shirt sizes, or story points, required to build and deliver the features.
  • Business Metrics: Define key business metrics that will be tracked such as adoption, utilization and success criteria for the features.
  • Customer and Dependencies Impact: Understand how the changes will impact customers as well as upstream/downstream dependencies.
  • Roadmapping: Develop a preliminary roadmap to guide the project’s progression.

4.1.3 Roles and Responsibilities:

The roles and responsibilities in the inception phase include:

  • Stakeholders: Validate initial ideas and provide necessary approvals or feedback.
  • Product Owner: Responsible for defining the vision and scope of the product, expressed through a set of epics and an initial product backlog. The product owner interacts with customers, defines the customer problems, and works with other stakeholders to find solutions. The product owner also defines the end-to-end customer experience and how the customer will benefit from the proposed solution.
  • Scrum Master: Facilitates the Inception meetings and ensures that all team members understand the project’s objectives and scope. The Scrum master helps write use-cases, user-stories and functional/non-functional requirements into the Sprint Backlog.
  • Development Team: Provide feedback on technical feasibility and initial estimates for the project scope.

By carefully executing the Inception phase, the team can ensure that everyone has a clear understanding of the ‘what,’ ‘why,’ and ‘how’ of the project, thereby setting the stage for a more organized and focused development process.

4.2. Planning Phase

The Planning phase is an essential stage in the Software Development Life Cycle (SDLC), especially in methodologies that adopt Agile frameworks like Scrum or Kanban. This phase aims to detail the “how” of the project, which is built upon the “what” and “why” established during the Inception phase. The Planning phase involves specifying objectives, estimating timelines, allocating resources, and setting performance measures, among other activities.

4.2.1 Objectives

The objectives of the planning phase include:

  1. Sprint Planning: Determine the scope of the next sprint, specifying what tasks will be completed.
  2. Release Planning: Define the features and components that will be developed and released in the upcoming iterations.
  3. Resource Allocation: Assign tasks and responsibilities to team members based on skills and availability.
  4. Timeline and Effort Estimation: Define the timeframe within which different tasks and sprints should be completed.
  5. Risk Management: Identify, assess, and plan for risks and contingencies.
  6. Technical Planning: Decide the architectural layout, technologies to be used, and integration points, among other technical aspects.

4.2.2 Practices:

The planning phase includes the following practices:

  • Sprint Planning: A meeting where the team decides what to accomplish in the coming sprint.
  • Release Planning: High-level planning to set goals and timelines for upcoming releases.
  • Backlog Refinement: Ongoing process to add details, estimates, and order to items in the Product Backlog.
  • Effort Estimation: Techniques like story points, t-shirt sizes, or time-based estimates to assess how much work is involved in a given task or story.

4.2.3 Roles and Responsibilities:

The roles and responsibilities in the planning phase include:

  • Product Owner: Prioritizes the Product Backlog and clarifies details of backlog items. Helps the team understand which items are most crucial to the project’s success.
  • Scrum Master: Facilitates planning meetings and ensures that the team has what they need to complete their tasks. Addresses any impediments that the team might face.
  • Development Team: Provides effort estimates for backlog items, helps break down stories into tasks, and commits to the amount of work they can accomplish in the next sprint.
  • Stakeholders: May provide input on feature priority or business requirements, though they typically don’t participate in the detailed planning activities.

4.2.4 Key Deliverables:

Key deliverables in the planning phase include:

  • Sprint Backlog: A list of tasks and user stories that the development team commits to complete in the next sprint.
  • Execution Plan: A plan outlining the execution of the software development including dependencies from other teams. The plan also identifies key reversible and irreversible decisions.
  • Deployment and Release Plan: A high-level plan outlining the features and major components to be released over a series of iterations. This plan also includes feature flags and other configuration parameters such as throttling limits.
  • Resource Allocation Chart: A detailed breakdown of who is doing what.
  • Risk Management Plan: A documentation outlining identified risks, their severity, and mitigation strategies.

By thoroughly engaging in the Planning phase, teams can have a clear roadmap for what needs to be achieved, how it will be done, who will do it, and when it will be completed. This sets the project on a course that maximizes efficiency, minimizes risks, and aligns closely with the stakeholder requirements and business objectives.

4.3. Design Phase

The Design phase in the Software Development Life Cycle (SDLC) is a critical step that comes after planning and before coding. During this phase, the high-level architecture and detailed design of the software system are developed. This serves as a blueprint for the construction of the system, helping the development team understand how the software components will interact, what data structures will be used, and how user interfaces will be implemented, among other things.

4.3.1 Objectives:

The objectives of the design phase include:

  1. Architectural Design: Define the software’s high-level structure and interactions between components. Understand upstream and downstream dependencies and how architecture decisions affect those dependencies. The architecture will ensure scalability to at least twice the peak traffic and will build in redundancy to avoid single points of failure. Architecture decisions should be documented in an Architecture Decision Record format and classified as reversible or irreversible.
  2. Low-Level Design: Break down the architectural components into detailed design specifications including functional and non-functional considerations. The design documents should include alternative solutions and tradeoffs when selecting a recommended approach. The design document should consider how the software can be delivered incrementally and whether multiple components can be developed in parallel. Other best practices such as not breaking backward compatibility, composable modules, consistency, idempotency, pagination, no unbounded operations, validation, and purposeful design should be applied to the software design.
  3. API Design: Specify the endpoints, request/response models, and the underlying architecture for any APIs that the system will expose.
  4. User Interface Design: Design the visual elements and user experiences.
  5. Data Model Design: Define the data structures and how data will be stored, accessed, and managed.
  6. Functional Design: Elaborate on the algorithms, interactions, and methods that will implement software features.
  7. Security Design: Implement security measures like encryption, authentication, and authorization.
  8. Performance Considerations: Design for scalability, reliability, and performance optimization.
  9. Design for Testability: Ensure elements can be easily tested.
  10. Operational Excellence: Document monitoring, logging, operational and business metrics, alarms, and dashboards to monitor health of the software components.

4.3.2 Practices:

The design phase includes the following practices:

  • Architectural and Low-Level Design Reviews: Conducted to validate the robustness, scalability, and performance of the design. High-level reviews focus on the system architecture, while low-level reviews dig deep into modules, classes, and interfaces.
  • API Design Review: A specialized review to ensure that the API design adheres to RESTful principles, is consistent, and meets performance and security benchmarks.
  • Accessibility Review: Review UI for accessibility support.
  • Design Patterns: Utilize recognized solutions for common design issues.
  • Wireframing and Prototyping: For UI/UX design.
  • Sprint Zero: Sometimes used in Agile for initial setup, including design activities.
  • Backlog Refinement: Further details may be added to backlog items in the form of acceptance criteria and design specs.

4.3.3 Roles and Responsibilities:

The roles and responsibilities in the design phase include:

  • Product Owner: Defines the acceptance criteria for user stories and validates that the design meets business and functional requirements.
  • Scrum Master: Facilitates design discussions and ensures that design reviews are conducted effectively.
  • Development Team: Creates the high-level architecture, low-level detailed design, and API design. Chooses design patterns and data models.
  • UX/UI Designers: Responsible for user interface and user experience design.
  • System Architects: Involved in high-level design and may also review low-level designs and API specifications.
  • API Designers: Specialized role focusing on API design aspects such as endpoint definition, rate limiting, and request/response models.
  • QA Engineers: Participate in design reviews to ensure that the system is designed for testability.
  • Security Engineers: Involved in both high-level and low-level design reviews to ensure that security measures are adequately implemented.

4.3.4 Key Deliverables:

Key deliverables in the design phase include:

  • High-Level Architecture Document: Describes the architectural layout and high-level components.
  • Low-Level Design Document: Provides intricate details of each module, including pseudo-code, data structures, and algorithms.
  • API Design Specification: Comprehensive documentation for the API including endpoints, methods, request/response formats, and error codes.
  • Database Design: Includes entity-relationship diagrams, schema designs, etc.
  • UI Mockups: Prototypes or wireframes of the user interface.
  • Design Review Reports: Summaries of findings from high-level, low-level, and API design reviews.
  • Proof of Concept and Spikes: Build proof of concepts or execute spikes to learn about high risk design components.
  • Architecture Decision Records: Documents key decisions that were made including tradeoffs, constraints, stakeholders and final decision.

The Design phase serves as the blueprint for the entire project. Well-thought-out and clear design documentation helps mitigate risks, provides a clear vision for the development team, and sets the stage for efficient and effective coding, testing, and maintenance.

4.4 Development Phase

The Development phase in the Software Development Life Cycle (SDLC) is the stage where the actual code for the software is written, based on the detailed designs and requirements specified in the preceding phases. This phase involves coding, unit testing, integration, and sometimes even preliminary system testing to validate that the implemented features meet the specifications.

4.4.1 Objectives:

The objectives of the development phase include:

  1. Coding: Translate design documentation and specifications into source code.
  2. Modular Development: Build the software in chunks or modules for easier management, testing, and debugging.
  3. Unit Testing: Verify that each unit or module functions as designed.
  4. Integration: Combine separate units and modules into a single, functioning system.
  5. Code Reviews: Validate that code adheres to standards, guidelines, and best practices.
  6. Test Plans: Review test plans for all test-cases that will validate the changes to the software.
  7. Documentation: Produce inline code comments, READMEs, and technical documentation.
  8. Early System Testing: Sometimes, a form of system testing is performed to catch issues early.

4.4.2 Practices:

The development phase includes the following practices:

  • Pair Programming: Two developers work together at one workstation, enhancing code quality.
  • Test-Driven Development (TDD): Write failing tests before writing code, then write code to pass the tests.
  • Code Reviews: Peer reviews to maintain code quality.
  • Continuous Integration: Frequently integrate code into a shared repository and run automated tests.
  • Version Control: Use of tools like Git to manage changes and history.
  • Progress Reports: Use burndown charts, risk analysis, blockers, and other data to keep stakeholders up to date on the progress of the development effort.

4.4.3 Roles and Responsibilities:

The roles and responsibilities in the development phase include:

  • Product Owner: Prioritizes features and stories in the backlog that are to be developed, approves or rejects completed work based on acceptance criteria.
  • Scrum Master: Facilitates daily stand-ups, removes impediments, and ensures the team is focused on the tasks at hand.
  • Software Developers: Write the code, perform unit tests, and integrate modules.
  • QA Engineers: Often involved early in the development phase to write automated test scripts, and to run unit tests along with developers.
  • DevOps Engineers: Manage the CI/CD pipeline to automate code integration and preliminary testing.
  • Technical Leads/Architects: Provide guidance and review code to ensure it aligns with architectural plans.

4.4.4 Key Deliverables:

Key deliverables in the development phase include:

  • Source Code: The actual code files, usually stored in a version control system.
  • Unit Test Cases and Results: Documentation showing what unit tests have been run and their results.
  • Code Review Reports: Findings from code reviews.
  • Technical Documentation: Sometimes known as codebooks or developer guides, these detail how the code works for future maintenance.

4.4.5 Best Practices for Development:

Best practices in the development phase include:

  • Clean Code: Write code that is easy to read, maintain, and extend.
  • Modularization: Build in a modular fashion for ease of testing, debugging, and scalability.
  • Commenting and Documentation: In-line comments and external documentation to aid in future maintenance and understanding.
  • Code Repositories: Use version control systems to keep a historical record of all changes and to facilitate collaboration.

The Development phase is where the design and planning turn into a tangible software product. Following best practices and guidelines during this phase can significantly impact the quality, maintainability, and scalability of the end product.

4.5. Testing Phase

The Testing phase in the Software Development Life Cycle (SDLC) is crucial for validating the system’s functionality, performance, security, and stability. This phase involves various types of testing methodologies to ensure that the software meets all the requirements and can handle expected and unexpected situations gracefully.

4.5.1 Objectives:

The objectives of the testing phase include:

  1. Validate Functionality: Ensure all features work as specified in the requirements.
  2. Ensure Quality: Confirm that the system is reliable, secure, and performs optimally.
  3. Identify Defects: Find issues that need to be addressed before the software is deployed.
  4. Risk Mitigation: Assess and mitigate potential security and performance risks.

4.5.2 Practices:

The testing phase includes the following practices:

  • Test Planning: Create a comprehensive test plan detailing types of tests, scope, schedule, and responsibilities.
  • Test Case Design: Prepare test cases based on various testing requirements.
  • Test Automation: Automate repetitive and time-consuming tests.
  • Continuous Testing: Integrate testing into the CI/CD pipeline for continuous feedback.

4.5.3 Types of Testing and Responsibilities:

The testing phase includes the following types of testing:

  1. Integration Testing:
    Responsibilities: QA Engineers, DevOps Engineers
    Objective: Verify that different modules or services work together as expected.
  2. Functional Testing:
    Responsibilities: QA Engineers
    Objective: Validate that the system performs all the functions as described in the specifications.
  3. Load/Stress Testing:
    Responsibilities: Performance Test Engineers, DevOps Engineers
    Objective: Check how the system performs under heavy loads.
  4. Security and Penetration Testing:
    Responsibilities: Security Engineers, Ethical Hackers
    Objective: Identify vulnerabilities that an attacker could exploit.
  5. Canary Testing:
    Responsibilities: DevOps Engineers, Product Owners
    Objective: Validate that new features or updates will perform well with a subset of the production environment before full deployment.
  6. Other Forms of Testing:
    Smoke Testing: Quick tests to verify basic functionality after a new build or deployment.
    Regression Testing: Ensure new changes haven’t adversely affected existing functionalities.
    User Acceptance Testing (UAT): Final validation that the system meets business needs, usually performed by stakeholders or end-users.

4.5.4 Key Deliverables:

  • Test Plan: Describes the scope, approach, resources, and schedule for the testing activities.
  • Test Cases: Detailed description of what to test, how to test, and expected outcomes.
  • Test Scripts: Automated scripts for testing.
  • Test Reports: Summary of testing activities, findings, and results.

4.5.5 Best Practices for Testing:

  • Early Involvement: Involve testers from the early stages of SDLC for better understanding and planning.
  • Code Reviews: Conduct code reviews to catch potential issues before the testing phase.
  • Data-Driven Testing: Use different sets of data to evaluate how the application performs with various inputs.
  • Monitoring and Feedback Loops: Integrate monitoring tools to get real-time insights during testing and improve the test scenarios continuously.

The Testing phase is crucial for delivering a reliable, secure, and robust software product. The comprehensive testing strategy that includes various types of tests ensures that the software is well-vetted before it reaches the end-users.

4.6. Deployment Phase

The Deployment phase in the Software Development Life Cycle (SDLC) involves transferring the well-tested codebase from the staging environment to the production environment, making the application accessible to the end-users. This phase aims to ensure a smooth transition of the application from development to production with minimal disruptions and optimal performance.

4.6.1 Objectives:

The objectives of the deployment phase include:

  1. Release Management: Ensure that the application is packaged and released properly.
  2. Transition: Smoothly transition the application from staging to production.
  3. Scalability: Ensure the infrastructure can handle the production load.
  4. Rollback Plans: Prepare contingencies for reverting changes in case of failure.

4.6.2 Key Components and Practices:

The key components and practices in the deployment phase include:

  1. Continuous Deployment (CD):
    Responsibilities: DevOps Engineers, QA Engineers, Developers
    Objective: Automatically deploy all code changes to the production environment that pass the automated testing phase.
    Tools: Jenkins, GitLab CI/CD, GitHub Actions, Spinnaker
  2. Infrastructure as Code (IaC):
    Responsibilities: DevOps Engineers, System Administrators
    Objective: Automate the provisioning and management of infrastructure using code.
    Tools: Terraform, AWS CloudFormation, Ansible
  3. Canary Testing:
    Responsibilities: DevOps Engineers, QA Engineers
    Objective: Gradually roll out the new features to a subset of users before a full-scale rollout to identify potential issues.
  4. Phased Deployment:
    Responsibilities: DevOps Engineers, Product Managers
    Objective: Deploy the application in phases to monitor its stability and performance, enabling easier troubleshooting.
  5. Rollback:
    Responsibilities: DevOps Engineers
    Objective: Be prepared to revert the application to its previous stable state in case of any issues.
  6. Feature Flags:
    Responsibilities: Developers, Product Managers
    Objective: Enable or disable features in real-time without deploying new code.
  7. Resilience:
    Responsibilities: DevOps Engineers, Developers
    Objective: Prevent retry storms, throttle the number of client requests, and apply reliability patterns such as Circuit Breakers and Bulkheads (a minimal circuit-breaker sketch follows this list). Avoid bimodal behavior and maintain software latency and throughput within the defined SLAs.
  8. Scalability:
    Responsibilities: DevOps Engineers, Developers
    Objective: Monitor scalability limits and implement elastic scalability. Review scaling limits periodically.
  9. Throttling and Sharding Limits (and other Configurations):
    Responsibilities: DevOps Engineers, Developers
    Objective: Review configuration and performance settings such as throttling and sharding limits.
  10. Access Policies:
    Responsibilities: DevOps Engineers, Developers
    Objective: Review access policies, permissions and roles that can access the software.
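
The resilience practices above mention circuit breakers; the sketch below is a minimal, hypothetical Python illustration of the pattern. The class name, thresholds, and the wrapped operation are assumptions for illustration, not a specific library's API.

    import time

    class CircuitBreaker:
        """Minimal circuit breaker: opens after repeated failures, recovers after a cooldown."""
        def __init__(self, max_failures=5, reset_timeout_seconds=30):
            self.max_failures = max_failures
            self.reset_timeout_seconds = reset_timeout_seconds
            self.failure_count = 0
            self.opened_at = None  # time when the breaker tripped

        def call(self, operation):
            # If the breaker is open, fail fast until the cooldown expires.
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_timeout_seconds:
                    raise RuntimeError("circuit open; failing fast")
                self.opened_at = None  # half-open: allow a trial call
            try:
                result = operation()
            except Exception:
                self.failure_count += 1
                if self.failure_count >= self.max_failures:
                    self.opened_at = time.time()  # trip the breaker
                raise
            else:
                self.failure_count = 0  # success closes the breaker
                return result

    # Hypothetical usage: wrap a flaky downstream call.
    # breaker = CircuitBreaker()
    # breaker.call(lambda: call_downstream_service(request))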

4.6.3 Roles and Responsibilities:

The deployment phase includes the following roles and responsibilities:

  • DevOps Engineers: Manage CI/CD pipelines, IaC, and automate deployments.
  • Product Managers: Approve deployments based on feature completeness and business readiness.
  • QA Engineers: Ensure the application passes all tests in the staging environment before deployment.
  • System Administrators: Ensure that the infrastructure is ready and scalable for deployment.

4.6.4 Key Deliverables:

The deployment phase includes the following key deliverables:

  • Deployment Checklist: A comprehensive list of tasks and checks before, during, and after deployment.
  • CD Pipeline Configuration: The setup details for the Continuous Deployment pipeline.
  • Automated Test Cases: Essential for validating the application before automatic deployment.
  • Operational, Security and Compliance Review Documents: These documents define checklists for operational excellence, security, and compliance support in the software.
  • Deployment Logs: Automated records of what was deployed, when, and by whom (usually by the automation system).
  • Logging: Review data that will be logged and respective log levels so that data privacy is not violated and excessive logging is avoided.
  • Deployment Schedule: A timeline for the deployment process.
  • Observability: Health dashboard to monitor operational and business metrics, and alarms to receive notifications when service-level-objectives (SLO) are violated. The operational metrics will include availability, latency, and error metrics. The health dashboard will monitor utilization of CPU, disk space, memory, and network resources.
  • Rollback Plan: A detailed plan for reverting to the previous stable version if necessary.

4.6.5 Best Practices:

Following are a few best practices for the deployment phase:

  • Automated Deployments: Use automation tools for deployment of code and infrastructure to minimize human error.
  • Monitoring and Alerts: Use monitoring tools to get real-time insights into application performance and set up alerts for anomalies.
  • Version Control: Ensure that all deployable artifacts are versioned for traceability.
  • Comprehensive Testing: Given the automated nature of Continuous Deployment, having a comprehensive suite of automated tests is crucial.
  • Rollback Strategy: Have an automated rollback strategy in case the new changes result in system failures or critical bugs.
  • Feature Toggles: Use feature flags to control the release of new features, which can be enabled or disabled without having to redeploy.
  • Audit Trails: Maintain logs and history for compliance and to understand what was deployed and when.
  • Documentation: Keep detailed records of the deployment process, configurations, and changes.
  • Stakeholder Communication: Keep all stakeholders in the loop regarding deployment schedules, success, or issues.
  • Feedback from Early Adopters: If the software is being released to internal or beta customers, capture feedback from those early adopters, including any bug reports.
  • Marketing and external communication: The release may need to be coordinated with a marketing campaign so that customers can be notified about new features.

The Deployment phase is critical for ensuring that the software is reliably and securely accessible by the end-users. A well-planned and executed deployment strategy minimizes risks and disruptions, leading to a more dependable software product.

4.7 Maintenance Phase

The Maintenance phase in the Software Development Life Cycle (SDLC) is the ongoing process of ensuring the software’s continued effective operation and performance after its release to the production environment. The objective of this phase is to sustain the software in a reliable state, provide continuous support, and make iterative improvements or patches as needed.

4.7.1 Objectives:

The objectives of the maintenance phase include:

  1. Bug Fixes: Address any issues or defects that arise post-deployment.
  2. Updates & Patches: Release minor or major updates and patches to improve functionality or security.
  3. Optimization: Tune the performance of the application based on metrics and feedback.
  4. Scalability: Ensure that the software can handle growth in terms of users, data, and transaction volume.
  5. Documentation: Update all documents related to software changes and configurations.

4.7.2 Key Components and Practices:

The key components and practices of the maintenance phase include:

  1. Incident Management:
    Responsibilities: Support Team, DevOps Engineers
    Objective: Handle and resolve incidents affecting the production environment.
  2. Technical Debt Management:
    Responsibilities: Development Team, Product Managers
    Objective: Prioritize and resolve accumulated technical debt to maintain code quality and performance.
  3. Security Updates:
    Responsibilities: Security Engineers, DevOps Engineers
    Objective: Regularly update and patch the software to safeguard against security vulnerabilities.
  4. Monitoring & Analytics:
    Responsibilities: DevOps Engineers, Data Analysts
    Objective: Continuously monitor software performance, availability, and usage to inform maintenance tasks.
  5. Documentation and Runbooks:
    Responsibilities: Support Team, DevOps Engineers
    Objective: Define runbooks and cookbooks for development processes and operational issues.

4.7.3 Roles and Responsibilities:

The maintenance phase includes the following roles and responsibilities:

  • DevOps Engineers: Monitor system health, handle deployments for updates, and coordinate with the support team for incident resolution.
  • Support Team: Provide customer support, report bugs, and assist in reproducing issues for the development team.
  • Development Team: Develop fixes, improvements, and updates based on incident reports, performance metrics, and stakeholder feedback.
  • Product Managers: Prioritize maintenance tasks based on customer needs, business objectives, and technical requirements.
  • Security Engineers: Regularly audit the software for vulnerabilities and apply necessary security patches.

4.7.4 Key Deliverables:

Key deliverables for the maintenance phase include:

  • Maintenance Plan: A detailed plan outlining maintenance activities, schedules, and responsible parties.
  • Patch Notes: Documentation describing what has been fixed or updated in each new release.
  • Performance Reports: Regular reports detailing the operational performance of the software.

4.7.5 Best Practices for Maintenance:

Following are a few best practices for the maintenance phase:

  • Automated Monitoring: Use tools like Grafana, Prometheus, or Zabbix for real-time monitoring of system health.
  • Feedback Loops: Use customer feedback and analytics to prioritize maintenance activities.
  • Version Control: Always version updates and patches for better tracking and rollback capabilities.
  • Knowledge Base: Maintain a repository of common issues and resolutions to accelerate incident management.
  • Scheduled Maintenance: Inform users about planned downtimes for updates and maintenance activities.

The Maintenance phase is crucial for ensuring that the software continues to meet user needs and operate effectively in the ever-changing technical and business landscape. Proper maintenance ensures the longevity, reliability, and continual improvement of the software product. Note that although maintenance is described as a separate phase above, it encompasses the other phases from inception to deployment, and each change is developed and deployed incrementally in an iterative Agile process.

5. Best Practices for Ensuring Reliability, Quality, and Incremental Delivery

The following best practices ensure incremental delivery with higher reliability and quality:

5.1. Iterative Development:

Embrace the Agile principle of delivering functional software frequently. The focus should be on breaking down the product into small, manageable pieces and improving it in regular iterations, usually two to four-week sprints.

  • Tools & Techniques: Feature decomposition, Sprint planning, Short development cycles.
  • Benefits: Faster time to market, easier bug isolation and tracking, and the ability to incorporate user feedback quickly.

5.2. Automated Testing:

Implement Test-Driven Development (TDD) or Behavior-Driven Development (BDD) to script tests before the actual coding begins. Maintain these tests to run automatically every time a change is made.

  • Tools & Techniques: JUnit, Selenium, Cucumber.
  • Benefits: Instant feedback on code quality, regression testing, enhanced code reliability.
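
As a concrete illustration of the red-green cycle, here is a minimal sketch using Python's built-in unittest module; the apply_discount function and its rules are hypothetical and exist only to show the shape of a test written before the implementation.

    import unittest

    def apply_discount(price, percent):
        """Simplest implementation that makes the tests below pass."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        # Written first (red); passes only once apply_discount is implemented (green).
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()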

5.3. Design Review:

  • Detailed Explanation: A formal process where architects and developers evaluate high-level and low-level design documents to ensure they meet the project requirements, are scalable, and adhere to best practices.
  • Tools & Techniques: Design diagrams, UML, Peer Reviews, Design Review Checklists.
  • Benefits: Early identification of design flaws, alignment with stakeholders, consistency across the system architecture.

5.4 Code Reviews:

Before any code gets merged into the main repository, it should be rigorously reviewed by other developers to ensure it adheres to coding standards, is optimized, and is free of bugs.

  • Tools & Techniques: Git Pull Requests, Code Review Checklists, Pair Programming.
  • Benefits: Team-wide code consistency, early detection of anti-patterns, and a secondary check for overlooked issues.

5.5 Security Review:

A comprehensive evaluation of the security aspects of the application, involving both static and dynamic analyses, is conducted to identify potential vulnerabilities.

  • Tools & Techniques: OWASP Top 10, Security Scanners like Nessus or Qualys, Code Review tools like Fortify, Penetration Testing.
  • Benefits: Proactive identification of security vulnerabilities, adherence to security best practices, and compliance with legal requirements.

5.6 Operational Review:

Before deploying any new features or services, assess the readiness of the operational environment, including infrastructure, data backup, monitoring, and support plans.

  • Tools & Techniques: Infrastructure as Code tools like Terraform, Monitoring tools like Grafana, Documentation.
  • Benefits: Ensures the system is ready for production, mitigates operational risks, confirms that deployment and rollback strategies are in place.

5.7 CI/CD (Continuous Integration and Continuous Deployment):

Integrate all development work frequently and deliver changes to the end-users reliably and rapidly using automated pipelines.

  • Tools & Techniques: Jenkins, GitLab CI/CD, Docker, Kubernetes.
  • Benefits: Quicker discovery of integration bugs, reduced lead time for feature releases, and improved deployability.

5.8 Monitoring:

Implement sophisticated monitoring tools that continuously observe system performance, user activity, and operational health.

  • Tools & Techniques: Grafana, Prometheus, New Relic.
  • Benefits: Real-time data analytics, early identification of issues, and rich metrics for performance tuning.

5.9 Retrospectives:

At the end of every sprint or project phase, the team should convene to discuss what worked well, what didn’t, and how processes can be improved.

  • Tools & Techniques: Post-its for brainstorming, Sprint Retrospective templates, Voting mechanisms.
  • Benefits: Continuous process improvement, team alignment, and reflection on the project’s success and failures.

5.10 Product Backlog Management:

A live document containing all known requirements, ranked by priority and constantly refined to reflect changes and learnings.

  • Tools & Techniques: JIRA, Asana, Scrum boards.
  • Benefits: Focused development on high-impact features, adaptability to market or user needs.

5.11. Kanban for Maintenance:

For ongoing maintenance work and technical debt management, utilize a Kanban system to visualize work, limit work-in-progress, and maximize efficiency.

  • Tools & Techniques: Kanban boards, JIRA Kanban, Trello.
  • Benefits: Dynamic prioritization, quicker task completion, and efficient resource utilization.

5.12 Feature Flags:

Feature flags allow developers to toggle the availability of certain functionalities without deploying new versions.

  • Tools & Techniques: LaunchDarkly, Config files, Custom-built feature toggles.
  • Benefits: Risk mitigation during deployments, simpler rollbacks, and fine-grained control over feature releases.
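
A minimal sketch of a homegrown feature toggle is shown below; the flag name, environment-variable convention, and checkout functions are hypothetical, and a real system would more likely use a managed flag service or configuration store.

    import os

    class FeatureFlags:
        """Reads boolean flags from environment variables, e.g. FEATURE_NEW_CHECKOUT=true."""
        def __init__(self, prefix="FEATURE_"):
            self.prefix = prefix

        def is_enabled(self, name, default=False):
            value = os.getenv(self.prefix + name.upper())
            if value is None:
                return default
            return value.strip().lower() in ("1", "true", "yes", "on")

    def legacy_checkout_flow(cart):
        return {"path": "legacy", "items": len(cart)}  # stand-in for the existing path

    def new_checkout_flow(cart):
        return {"path": "new", "items": len(cart)}     # stand-in for the new path

    flags = FeatureFlags()

    def checkout(cart):
        # Toggle between the old and new code paths without redeploying.
        if flags.is_enabled("new_checkout"):
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)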

5.13 Documentation:

Create comprehensive documentation, ranging from code comments and API docs to high-level architecture guides and FAQs for users.

  • Tools & Techniques: Wiki platforms, OpenAPI/Swagger for API documentation, Code comments.
  • Benefits: Streamlined onboarding for new developers, easier troubleshooting, and long-term code maintainability.

By diligently applying these multi-layered Agile best practices and clearly defined roles, your SDLC will be a well-oiled machine—more capable of rapid iterations, quality deliverables, and high adaptability to market or user changes.

6. Conclusion

The Agile process combined with modern software development practices offers an integrated and robust framework for building software that is adaptable, scalable, and high-quality. This approach is geared towards achieving excellence at every phase of the Software Development Life Cycle (SDLC), from inception to deployment and maintenance. The key benefits of this modern software development process include flexibility and adaptability, reduced time-to-market, enhanced quality, operational efficiency, risk mitigation, continuous feedback, transparency, collaboration, cost-effectiveness, compliance and governance, and documentation for sustainment. By leveraging these Agile and modern software development practices, organizations can produce software solutions that are not only high-quality and reliable but also flexible enough to adapt to ever-changing requirements and market conditions.

May 21, 2023

Heuristics from “Code That Fits in Your Head”

Filed under: Methodologies,Technology,Uncategorized — admin @ 5:00 pm

Code maintenance and readability are important aspects of writing software systems, and “Code That Fits in Your Head” offers a lot of practical advice for writing maintainable code. Following are a few important heuristics from the book:

1. Art or Science

In the first chapter, the author compares software development with other fields such as civil engineering, which deals with the design, construction, and maintenance of components. Though software development has these phases among others, its design and construction phases are intimately connected and require continuous iteration. Another metaphor discussed in the book is thinking of software development as a living organism like a garden: just as you prune weeds in a garden, you have to refactor the code base and manage technical debt. A further metaphor described in the book is software craftsmanship, where a software developer may progress from apprentice to journeyman to master. Though these perspectives help, software doesn’t quite fit the art metaphor, and the author instead suggests heuristics and guidelines for programming. The author introduces software engineering as a structured framework for development activities.

2. Checklists

A lot of professions, such as airplane pilots and doctors, follow a checklist for accomplishing a complex task. You may use a similar checklist for setting up a new code base, such as using Git, automating the build, enabling all compiler error messages, and using linters and static analysis. Though software engineering is more than following a checklist, these measures help make small improvements.

3. Tackling Complexity

This chapter defines sustainability and quality as the purpose of the book, since software may exist for decades and needs to sustain its organization. Software exists to provide value, though in some cases the value may not be immediate. This means that at times worse technology succeeds because it provides a faster path to value, and companies that focus too much on perfection can go out of business; Richard Gabriel coined the aphorism that worse is better. Sustainability chooses a middle ground by moving in the right direction with checklists and balanced software development practices. The author compares the computer with the human brain; though the comparison is not entirely fair, human working memory is much smaller and can hold only about four to seven pieces of information. This number is important when writing code, as you spend more time reading code than writing it, and code with a large number of variables or conditional branches is harder to understand. The author refers to the work of Daniel Kahneman, who suggested a model of thought comprising two systems: System 1 and System 2. When a programmer is in the zone or in a flow, System 1 is always active and tries to understand the code. This means that writing modular code with fewer dependencies, variables, and decisions is easier to understand and maintain. The human brain has limited working memory, and if the code handles more than about seven things at once it becomes too complex to follow.

4. Vertical Slice and Walking Skeleton

This chapter recommends starting with and deploying a vertical slice of the application to get to working software early. A vertical slice may consist of multiple layers, but it gives early feedback and is working software. A number of software development methodologies, such as Test-driven development, Behavior-driven development, Domain-driven design, Type-driven development, and Property-driven development, help build fine-grained implementations with tests. For example, if you don’t have tests, you can use characterization tests to describe the behavior of existing software. Tests generally follow the Arrange-Act-Assert phases, where the arrange phase prepares the test, the act phase invokes the operation under test, and the assert phase verifies the actual outcome. Documentation can further explain why decisions in the code were made. The walking skeleton supports the vertical slice through acceptance-test-driven development or outside-in test-driven development. For example, you can pick the simplest feature to implement, aiming for the happy path, to demonstrate that the system has a specific capability. The unit tests exercise this feature using a Fake Object, data transfer objects (DTOs), and interfaces (e.g., a RepositoryInterface), and the dependencies with this fake behavior are injected into the tests. Real objects that are difficult to test can use the Humble Object pattern, draining the object of branching logic. Making small improvements that are continuously delivered also keeps stakeholders updated so that they know when you will be done.
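
The Arrange-Act-Assert structure and the Fake Object idea can be sketched as follows; the ReservationService, RepositoryInterface, and FakeRepository names are hypothetical and simply mirror the kinds of seams described above.

    import unittest

    class RepositoryInterface:
        """Hypothetical dependency injected into the service under test."""
        def save(self, reservation):
            raise NotImplementedError

    class FakeRepository(RepositoryInterface):
        """Fake Object: an in-memory stand-in for the real database."""
        def __init__(self):
            self.saved = []
        def save(self, reservation):
            self.saved.append(reservation)

    class ReservationService:
        def __init__(self, repository):
            self.repository = repository
        def reserve(self, name, seats):
            reservation = {"name": name, "seats": seats}
            self.repository.save(reservation)
            return reservation

    class ReservationServiceTest(unittest.TestCase):
        def test_reserve_saves_reservation(self):
            # Arrange: prepare the service with a fake dependency.
            repository = FakeRepository()
            service = ReservationService(repository)
            # Act: invoke the operation under test.
            service.reserve("Alice", 2)
            # Assert: verify the observable outcome.
            self.assertEqual(repository.saved, [{"name": "Alice", "seats": 2}])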

5. Encapsulation

Encapsulation hides details by defining a contract that describes the valid interactions between objects and callers. Parameterized tests can capture the desired behavior and assert the invariants. Incremental changes can be added using test-driven development, which follows red-green-refactor: you first write a failing test, then make the test pass, and then refactor to improve the code. When using a contract to capture the interactions, you can use Postel’s law to build resilient systems, i.e.,

Be conservative in what you send, be liberal in what you accept.

Encapsulation guarantees that an object is always valid; for example, a constructor can validate all invariants, including preconditions and postconditions.
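
A minimal sketch of constructor-enforced invariants, assuming a hypothetical Money value object:

    class Money:
        """Invariant: amount is never negative and currency is always a 3-letter code."""
        def __init__(self, amount, currency):
            if amount < 0:
                raise ValueError("amount must not be negative")
            if len(currency) != 3:
                raise ValueError("currency must be a 3-letter ISO code")
            self.amount = amount
            self.currency = currency.upper()

        def add(self, other):
            # Postcondition: the result is still a valid Money object in the same currency.
            if other.currency != self.currency:
                raise ValueError("cannot add different currencies")
            return Money(self.amount + other.amount, self.currency)

    # Any Money instance that exists is valid by construction:
    # Money(-5, "USD") raises ValueError instead of producing a broken object.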

6. Triangulation

As human working memory is very small, you have to decompose and compartmentalize the code structure into smaller chunks that can be easily understood. The author describes a devil’s advocate technique for validating behavior in unit tests, where you try to pass the tests with an incomplete implementation; if you succeed, it tells you that you need more test cases. This process can be treated as a kind of triangulation:

As the tests get more specific, the code gets more generic
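
A tiny, hypothetical illustration of this devil’s advocate idea: a single test can be passed with a hard-coded return value, and only a second, more specific test forces the implementation to become generic.

    # One test can be satisfied by cheating:
    #   def add(a, b): return 5      # passes test_two_plus_three only
    # Adding a second case forces the general implementation:
    def add(a, b):
        return a + b

    def test_two_plus_three():
        assert add(2, 3) == 5

    def test_ten_plus_one():
        assert add(10, 1) == 11  # the hard-coded "return 5" would fail here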

7. Decomposition

Code rot occurs because no one pays attention to the overall quality when making small changes. You can use metrics to track gradual decay, such as keeping cyclomatic complexity below seven. In order to improve code readability, the author suggests the 80/24 rule, which limits a method to no more than 24 lines and each line to no more than 80 characters wide. The author also suggests the hex flower rule:

No more than seven things should be going on in a single piece of code.

The author defines abstraction as capturing the essence of an object, i.e.,

Abstraction is the elimination of the irrelevant and the amplification of the essential.

Another factor that influences decomposition is cohesion, so that code that works on the same data structure, or on all of its attributes, is defined in the same module or class. The author cautions against feature envy to decrease complexity; you may need to refactor such code into another method or class. The author refers to the “parse, don’t validate” technique for validating an object, so that the validating method takes less-structured input and produces more-structured output. Next, the author describes fractal architecture, where a large system is decomposed into smaller chunks and each chunk hides details but can be zoomed in to see its structure. Fractal architecture helps organize the code so that lower-level details are captured in a single abstract chunk that can easily fit in your brain.
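
The “parse, don’t validate” idea can be sketched as follows, assuming a hypothetical parse_email function: less-structured input (any string) is turned into a more-structured value that cannot exist in an invalid state.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EmailAddress:
        """More-structured output: the type itself proves the value was parsed."""
        local: str
        domain: str

    def parse_email(raw: str) -> EmailAddress:
        # Less-structured input is parsed into a structured value or rejected outright.
        local, sep, domain = raw.strip().partition("@")
        if not sep or not local or "." not in domain:
            raise ValueError(f"not a valid email address: {raw!r}")
        return EmailAddress(local=local, domain=domain)

    # Downstream code receives an EmailAddress, not a raw string, so it never re-validates.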

8. API Design

This chapter describes principles of API design such as affordance, which uses encapsulation to preserve the invariants of objects involved in the API. Affordance allows a caller to invoke an API only when its preconditions are met. The author strengthens the notion of affordance with a poka-yoke (mistake-proofing) analogy, which means a good interface design should be hard to misuse. Other techniques in the chapter include: write code for the readers; favor well-named code over comments; and X out names. X’ing out names replaces an API name with xxx and checks whether a reader can still guess what the API does. For example, you may identify APIs for command-query separation, where a method shaped like void xxx() can be considered a command with a side effect. In order to communicate the intent of an API, the author describes a hierarchy of communication: using the API’s distinct types, helpful names, good comments, automated tests, helpful commit messages, and good documentation.
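
A minimal illustration of command-query separation: even with the names X’d out, the signatures hint at intent, because a method that returns nothing must be a command with a side effect, while a method that returns a value and takes no arguments is most likely a query. The ShoppingCart class is hypothetical.

    class ShoppingCart:
        def __init__(self):
            self._items = []

        # Command: changes state, returns nothing ("void xxx()" shape).
        def add_item(self, item) -> None:
            self._items.append(item)

        # Query: returns a value, has no side effects ("Xxx xxx()" shape).
        def total_items(self) -> int:
            return len(self._items)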

9. Teamwork

In this chapter, the author provides tips for teamwork and communication with other teammates, such as writing good Git commit messages using the 50/72 rule: first write a summary no wider than 50 characters, followed by a blank line, and then detailed text no wider than 72 characters. Other techniques include Continuous Integration, which generally uses a trunk or main branch for all commits, with developers making small changes, optionally behind feature flags, that are frequently merged. Developers are encouraged to make small commits, and code ownership is collective to decrease the bus factor. The author refers to pair programming and mob programming for collaboration within the team. In order to facilitate collaboration, the author suggests reducing code review latency and rejecting any large change set. Reviewers should ask whether they can maintain the code, whether the code’s intent is clear, and whether it could be further simplified. You can also pull down the code and test it locally to gain further insight.

10. Augmenting Code

This chapter focuses on refactoring existing code for adding new functionality, enhancing existing behavior, and fixing bugs. The author suggests using feature flags when deploying incomplete code. The author describes the strangler pattern for refactoring with incremental changes and suggests:

For any significant change, don’t make it in-place; make it side-by-side.

The strangler pattern can be applied at the method level, where you add a new method instead of making an in-place change to an existing method, and then remove the original method once callers have migrated. Similarly, you can use a class-level strangler to introduce a new data structure and then remove the old references. The author suggests using semantic versioning so that you can communicate backward-compatible and breaking changes.
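
A method-level strangler might look like the hedged sketch below: the new method is added side by side, callers are migrated gradually (here behind a hypothetical toggle), and the original method is deleted only once nothing references it.

    class InvoiceService:
        def __init__(self, use_new_totals=False):
            self.use_new_totals = use_new_totals  # hypothetical migration toggle

        def total(self, invoice):
            # Old and new implementations live side by side during the migration.
            if self.use_new_totals:
                return self._total_with_tax(invoice)
            return self._legacy_total(invoice)

        def _legacy_total(self, invoice):
            # Original method; removed once all callers use _total_with_tax.
            return sum(line["amount"] for line in invoice["lines"])

        def _total_with_tax(self, invoice):
            subtotal = sum(line["amount"] for line in invoice["lines"])
            return subtotal * (1 + invoice.get("tax_rate", 0.0))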

11. Editing Unit Tests

Though an automated test suite lets you refactor production code safely, there is no safety net when making changes to the test code itself. You can add additional tests, supplement existing tests with new assertions, or change unit tests into parameterized tests without affecting existing behavior. Some programmers follow a single assertion per test and consider multiple assertions an Assertion Roulette, but the author suggests strengthening the postconditions in unit tests with additional assertions, which is somewhat similar to the Liskov Substitution Principle, which says that subtypes may weaken preconditions and strengthen postconditions. The author suggests separating the refactoring of test and production code, and using the IDE’s supported refactoring tools, such as rename, extract, or move method, when possible.
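
Converting individual unit tests into a parameterized test without changing behavior can be sketched with the standard library’s subTest; the is_leap_year function is hypothetical.

    import unittest

    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_known_years(self):
            cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
            for year, expected in cases:
                with self.subTest(year=year):
                    # Each case is reported separately, like a parameterized test.
                    self.assertEqual(is_leap_year(year), expected)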

12. Troubleshooting

When troubleshooting, you first have to understand what’s going on. This chapter suggests using the scientific method: make a hypothesis, perform the experiment, and compare the outcome to the prediction. The author also suggests simplifying and removing code to check whether a problem goes away. Other ways to simplify the code include composing an object graph in code instead of using complex dependency injection; using pure functions instead of mock objects; merging often instead of using complex diff tools; and learning SQL instead of using convoluted object-relational mapping. Another powerful technique for troubleshooting is rubber ducking, where you try to explain the problem and gain a new insight in the process. In order to build quality in, you should aim to reduce defects to zero. Tests also help with troubleshooting: writing an automated test to reproduce a defect before fixing it means the test then serves as a regression test. The author cautions against slow tests and non-deterministic defects due to race conditions. Finally, the author suggests using bisection, a binary search for the root cause where you reproduce the defect in half of the code and continue halving until you find the problem. You can also use Git’s bisect feature to find the commit that introduced the defect.

13. Separation of Concerns

The author describes Kent Beck’s aphorism:

Things that change at the same rate belong together. Things that change at different rates belong apart.

The principle of separation of concerns can be used to decompose working software into smaller parts, which can be decomposed further with nested composition. The author suggests using the Command Query Separation principle to keep side effects separated from query operations. Object-oriented composition tends to focus on composing side effects together, as in the Composite design pattern, which leads to complex code. The author describes Sequential Composition, which chains methods together, and Referential Transparency, which defines a deterministic method without side effects. Next, the author describes cross-cutting concerns such as logging, performance monitoring, auditing, metering, instrumentation, caching, fault tolerance, and security. The author finally describes the Decorator pattern to enhance functionality; for example, you can add logging to existing code without changing it and log actions from impure functions, as sketched below.
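As an illustration, here is a minimal Python sketch of the Decorator pattern applied to the logging cross-cutting concern; the repository interface is hypothetical, not from the book:

import logging

logging.basicConfig(level=logging.INFO)

class UserRepository:
    def find(self, user_id: int):
        return {"id": user_id}   # stand-in for an impure database lookup

class LoggingUserRepository:
    """Decorates a repository and logs calls without changing the original code."""
    def __init__(self, inner, logger):
        self._inner = inner
        self._logger = logger

    def find(self, user_id: int):
        self._logger.info("find(%s)", user_id)
        result = self._inner.find(user_id)
        self._logger.info("find(%s) -> %s", user_id, result)
        return result

repo = LoggingUserRepository(UserRepository(), logging.getLogger(__name__))
user = repo.find(42)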

14. Rhythm

This chapter describes daily and recurring practices that software development teams follow, such as daily stand-ups. The personal rhythm includes time-boxing or using the Pomodoro technique, taking breaks, using time deliberately, and learning to touch type. The team rhythm includes updating dependencies regularly and scheduling recurring chores such as renewing certificates. The author quotes Conway's law:

Any organization that designs a system […] will inevitably produce a design whose structure is a copy of the organization’s communication structure.

You can use this law to organize teams and their work in a way that matches the structure you want the code base to have.

15. The Usual Suspects

This chapter covers the usual suspects of software engineering: architecture, algorithms, performance, security, and other concerns. For example, performance is often a critical aspect, but premature optimization can be wasteful; correctness and an effort to reduce complexity and defects should take priority instead. For security, the author suggests STRIDE threat modelling, which covers Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Other techniques include property-based testing, and behavioral code analysis can be used to extract information from Git history to identify patterns and problems.

16. Tour

In this last chapter, the author gives tips on understanding unfamiliar code, such as navigating to the main method and finding your way around from there. You can check whether the application uses broader patterns such as fractal architecture or Model-View-Controller and understand authentication, authorization, routing, etc. The author provides a few suggestions about code structure and file organization, such as putting files in one directory, though he acknowledges this is contestable advice. The author refers to the hex flower and fractal architecture, where you can zoom in to see more details. With a monolithic architecture, the entire production code compiles to a single executable file, which makes it harder to reuse parts of the code in new ways. Another drawback of a monolithic architecture is that dependencies can be hard to manage, and abstractions can become coupled with implementations, which violates the Dependency Inversion Principle. Furthermore, you will need to detect and prevent cyclic dependencies in order to uphold the Acyclic Dependencies Principle. Finally, you can use the test suite to learn about the system.

Summary

The book is full of practical advice on writing maintainable code such as:

  • 50/72 Rule for Git commit messages
  • 80/24 Rule for writing small blocks of code
  • Tests based on Arrange-Act-Assert and Red-Green Refactor
  • Bisection for troubleshooting
  • Checklists for a new codebase
  • Command Query Separation
  • Cyclomatic Complexity and Counting the Variables
  • Decorators for cross-cutting concerns
  • Devil’s advocate for improving assertions
  • Feature flags
  • Functional core and imperative shell
  • Hierarchy of communication
  • Parse, don’t validate
  • Postel’s law to maintain invariants
  • Regularly update dependencies
  • Reproduce defects as Tests
  • Review code
  • Semantic Versioning
  • Separate refactoring of test and production code
  • Strangler pattern
  • Threat modeling using STRIDE
  • Transformation priority premise to make small changes while keeping the code in working condition
  • X-driven development by using unit-tests, static code analysis, etc.
  • X out of Names

These heuristics help make software development sustainable so that the team can make incremental changes to the code while maintaining high quality.

May 12, 2022

Applying Laws of Scalability to Technology and People

As businesses grow with a larger customer base and hire more employees, they face challenges in meeting customer demand, both in scaling their systems and in maintaining rapid product development with bigger teams. Businesses aim to scale systems linearly with additional computing and human resources. However, system architectures such as a monolith or a big ball of mud make scaling systems linearly onerous. Similarly, teams become less efficient as they grow in size and turn into silos. A general approach to scaling business or technical problems is divide and conquer: partition the problem into multiple sub-problems. A number of factors affect the scalability of software architecture and organizations, such as the interactions among system components or communication between teams. For example, coordination, communication, and data/knowledge coherence among system components and teams become disproportionately expensive as size grows. The software and business-management communities have developed a number of laws and principles that can be used to evaluate constraints and trade-offs related to these scalability challenges. Following is a list of a few laws from the technology and business domains for scaling software architectures and business organizations:

Amdahl’s Law

Amdahl’s Law, named after Gene Amdahl, is used to predict the speedup of a task’s execution time when it is scaled to run on multiple processors. It states that the maximum speedup is limited by the serial fraction of the task, which becomes a source of resource contention:

Speed up (P, N) = 1 / [ (1 - P) + P / N ]

Where P is the fraction of the task that can run in parallel on N processors. As N becomes large, P / N approaches 0, so the speedup is bounded by 1 / (1 – P), where the serial fraction (1 – P) becomes a source of contention due to data coherence, state synchronization, memory access, I/O, or other shared resources.

Amdahl’s law can also be described in terms of throughput using:

N / [ 1 + a (N - 1) ]

Where a is the serial fraction between 0 and 1. In parallel computing, a class of problems known as embarrassingly parallel workloads has little or no dependency among tasks, so a is effectively 0 because the tasks require no inter-task communication overhead. A small numeric sketch of both forms follows.
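As an illustration, here is a small Python sketch of both forms of Amdahl’s law with a hypothetical 95% parallel fraction; the numbers are only for demonstration:

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

def throughput_scaling(a: float, n: int) -> float:
    return n / (1.0 + a * (n - 1))

for n in (1, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2), round(throughput_scaling(0.05, n), 2))
# The speedup approaches 1 / (1 - 0.95) = 20 no matter how many processors are added.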

Amdahl’s law can also be used to scale teams as an organization grows: teams can be organized as small, cross-functional groups to parallelize feature work for different product lines or business domains, but the maximum speedup will still be limited by the serial fraction of the work. The serial work can include build and deployment pipelines; reviewing and merging changes; communication and coordination between teams; and dependencies on deliverables from other teams. Fred Brooks described in his book The Mythical Man-Month how adding people to a highly divisible task can reduce the overall task duration, but other tasks are not so easily divisible: while it takes one woman nine months to make one baby, “nine women can’t make a baby in one month”.

The theoretical speedup of the latency of the execution of a program according to Amdahl’s law (credit wikipedia).

Brooks’s Law

Brooks’s law, coined by Fred Brooks, states that adding manpower to a late software project makes it later due to ramp-up time. As the size of a team increases, the ramp-up time for new employees also increases, and the communication overhead among team members grows quadratically, e.g.

Number of communication channels = N x (N - 1) / 2

Organizations can build small teams, such as two-pizza/single-threaded teams, where the number of communication channels within each team does not explode and the cross-functional nature of the teams requires less communication with, and fewer dependencies on, other teams. Brooks’s law can equally be applied to technology when designing distributed services or components: each service is designed as a loosely coupled module around a business domain to minimize communication with other services, and services communicate only through well-designed interfaces.

Universal Scalability Law

The Universal Scalability Law, derived from Amdahl’s law by Dr. Neil Gunther, is used for capacity planning. It describes relative capacity in terms of concurrency, contention, and coherency:

C(N) = N / [ 1 + a(N – 1) + B x N(N – 1) ]

Where C(N) is the relative capacity, a is the serial fraction between 0 and 1 due to resource contention, and B is the penalty for data coherency or consistency. Because the coherency term (B) is quadratic in N, it becomes more expensive as N increases; e.g., using a consensus algorithm such as Paxos to reach state consistency among a large set of servers is impractical because it requires additional communication among all servers. Instead, large-scale distributed storage services generally use sharding/partitioning and gossip protocols with a leader-based consensus algorithm to minimize peer-to-peer communication.

The Universal Scalability Law can be applied to scaling teams in the same way as Amdahl’s law, where a models serial work or dependencies between teams and B models the communication required to maintain a consistent understanding among team members. The cost of B can be minimized by building small, cross-functional teams so that teams can make progress independently. You can also apply this model to any decision-making process by keeping the set of stakeholders or decision makers small, so that they can reach agreement without grinding to a halt. A small numeric sketch of the capacity curve follows.
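As an illustration, here is a small Python sketch of the Universal Scalability Law with hypothetical contention (a) and coherency (B) values; capacity peaks and then declines as the coherency cost grows quadratically:

def usl_capacity(n: int, a: float, b: float) -> float:
    return n / (1.0 + a * (n - 1) + b * n * (n - 1))

for n in (1, 8, 32, 128, 512):
    print(n, round(usl_capacity(n, a=0.05, b=0.001), 2))
# Capacity stops growing, and eventually shrinks, well before 512 nodes.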

Gossip protocols also apply to people: they can be used along with a writing culture, lunch-and-learn sessions, and osmotic communication to spread knowledge and learnings from one team to other teams.

Little’s Law

Little’s Law was developed by John Little to predict the number of items in a stable, non-preemptive queueing system. It is part of queueing theory and is described mathematically as:

L = A W

Where L is the average number of items within the system or queue, A is the average arrival rate of items, and W is the average time an item spends in the system. Little’s law and queueing theory can be used for capacity planning of computing servers and for minimizing the time items spend waiting in the queue.

Little’s law can also be applied to predict the task completion rate in an agile process, where L represents the work in progress (WIP) for a sprint, A represents the arrival and departure rate (the throughput or capacity) of tasks, and W represents the lead time, i.e., the average amount of time a task spends in the system.

WIP = Throughput x Lead-Time

Lead-Time = WIP / Throughput

You can use this relationship to reduce work in progress or lead time and improve the throughput of task completion. Little’s law suggests that you can accomplish more by keeping work in progress or inventory small. You will also be able to respond better to unpredictable delays if you keep a buffer in your capacity and avoid 100% utilization. A small numeric sketch follows.
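As an illustration, here is a small Python sketch of Little’s law applied to a sprint with hypothetical numbers: at a throughput of 5 tasks per week, 15 tasks in progress imply a 3-week lead time, and halving the WIP halves the lead time:

def lead_time(wip: float, throughput: float) -> float:
    return wip / throughput

print(lead_time(wip=15, throughput=5))   # 3.0 weeks
print(lead_time(wip=7.5, throughput=5))  # 1.5 weeks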

Kingman’s formula

Kingman’s formula extends Little’s law by adding utilization and variability to predict the wait time before a request is served:

E(Wq) ≈ [ p / (1 – p) ] x [ (ca^2 + cs^2) / 2 ] x T

(credit wikipedia)

where T is the mean service time, m = 1/T is the service rate, A is the mean arrival rate, p = A/m is the utilization, ca is the coefficient of variation for arrivals, and cs is the coefficient of variation for service times. Kingman’s formula shows that queue sizes grow toward infinity as you approach 100% utilization and that queues get longer with greater variability of work. These insights can be applied to both technical and business processes so that you can build systems with greater predictability of processing time, smaller wait times E(Wq), and higher throughput. A small numeric sketch follows.
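As an illustration, here is a small Python sketch of Kingman’s approximation with hypothetical values; the expected wait explodes as utilization approaches 1.0:

def kingman_wait(utilization: float, ca: float, cs: float, service_time: float) -> float:
    return (utilization / (1.0 - utilization)) * ((ca**2 + cs**2) / 2.0) * service_time

for rho in (0.5, 0.8, 0.95, 0.99):
    print(rho, round(kingman_wait(rho, ca=1.0, cs=1.0, service_time=0.1), 2))
# 0.5 -> 0.1s, 0.8 -> 0.4s, 0.95 -> 1.9s, 0.99 -> 9.9s of expected waiting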

Note: See Erlang analysis for serving requests in a system without a queue, where new requests are blocked or rejected if there is not sufficient capacity in the system.

Gustafson’s Law

Gustafson’s law refines Amdahl’s law with the keen observation that parallel computing enables solving larger problems, such as computations over very large data sets, in a fixed amount of time. It is defined as:

S = s + p x N

S = s + (1 – s) x N

S = N + (1 – N) x s

where S is the theoretical speedup with parallelism, N is the number of processors, s is the serial fraction, and p is the parallel fraction such that s + p = 1.

Gustafson’s law shows that the limitation imposed by the sequential fraction of a program can be countered by increasing the total amount of computation. This allows solving bigger technical and business problems with greater computing and human resources. A short comparison with Amdahl’s law is sketched below.
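As an illustration, here is a small Python sketch contrasting Gustafson’s scaled speedup with Amdahl’s fixed-size speedup for a hypothetical 5% serial fraction:

def gustafson_speedup(s: float, n: int) -> float:
    return n + (1 - n) * s

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (8, 64, 1024):
    print(n, round(gustafson_speedup(0.05, n), 1), round(amdahl_speedup(0.95, n), 1))
# Gustafson grows nearly linearly (~7.6, ~61, ~973) while Amdahl caps at about 20.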

Conway’s Law

Conway’s law states that an organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure. This means that the architecture of a system is derived from the team structures of an organization; however, you can also use the architecture to derive the team structures. This allows defining team boundaries along architecture boundaries so that each team is small, cross-functional, and cohesive. A study by the Harvard Business School found that large co-located teams tended to produce more tightly coupled and monolithic codebases, whereas small distributed teams produced more modular codebases. These lessons can be applied to scaling teams and architecture so that teams and system modules are built around organizational boundaries and independent concerns, promoting autonomy and reducing tight coupling.

Pareto Principle

The Pareto principle states that, for many outcomes, roughly 80% of consequences come from 20% of causes. This principle shows up in numerous technical and business problems: 20% of the code has 80% of the errors; customers use 20% of the functionality 80% of the time; 80% of optimization improvements come from 20% of the effort; and so on. It can also be used to identify hotspots or critical paths when scaling, as some microservices or teams may receive disproportionate demand. Scaling computing resources is relatively easy, but scaling a team beyond an organizational boundary is hard; you will have to apply other management tools such as prioritization, planning, metrics, automation, and better communication to manage the critical work.

Metcalfe’s Law

Metcalfe’s law states that if there are N users of a telecommunications network, the value of the network grows proportionally to N^2. It is also referred to as the network effect and applies to social networking sites.

Number of possible pair connections = N * (N – 1) / 2

Reed’s Law expanded this law and observed that the utility of large networks can scale exponentially with the size of the network.

Number of possible subgroups of a network = 2^N – N – 1

This law explains the popularity of social networking services via viral communication. These laws can be applied to model information flow between teams or message exchange between services, in order to avoid peer-to-peer communication among an extremely large group of people or set of nodes. A common alternative is to use a gossip protocol or to designate a partition leader for each group who communicates with other leaders and then disseminates information within the group.

Dunbar Number

Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships. It has a commonly used value of 150 and can be used to limit direct communication connections within an organization.

Wirth’s Law and Parkinson’s Law

Wirth’s Law, named after Niklaus Wirth, posits that software tends to become slower at a rate that outpaces the speed at which hardware becomes faster. Despite significant advances in processor speed, as predicted by Moore’s Law, software complexity increases correspondingly: developers often leverage these hardware improvements to create more intricate and feature-rich software, which can negate the hardware gains, and the use of programming languages and tools that do not prioritize efficiency can lead to bloated code.

In the realm of software development there are similar principles, such as Parkinson’s law, which suggests that work expands to fill the time allotted for its completion; given more time, software projects may become more complex or extended than initially necessary. Moreover, Hofstadter’s Law offers a cautionary perspective: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” This highlights the often unexpected delays in software development timelines. Brooks’s Law further adds to these insights with the adage, “Adding manpower to a late software project makes it later.” These laws collectively emphasize that the demand upon a resource tends to expand to match the supply of the resource, and that adding resources late also poses challenges due to the complexity of software development and project management.

Summary

The above laws show how you can partition a tightly coupled architecture and large teams into a modular architecture and small autonomous teams. For example, Amdahl’s law and the Universal Scalability Law demonstrate that you must account for the cost of serial work, coordination, and communication between partitions as you parallelize a problem, because these become bottlenecks as you scale. Brooks’s and Metcalfe’s laws indicate that you need to manage the number of communication paths among modules or teams, as they can grow quadratically and stifle your growth. Little’s law and Kingman’s formula establish that you need to reduce inventory or work in progress and avoid 100% utilization in order to provide reliable throughput. Conway’s law shows how architecture and team structures can be aligned for maximum autonomy and productivity. Together, these allow you to accomplish more work with small cross-functional teams that own independent product lines and build a modular architecture, reducing dependencies on other teams and subsystems. The Pareto principle can be used to make small changes to the architecture or teams that yield outsized gains in scalability and productivity. Wirth’s Law and Parkinson’s Law, when applied judiciously, can be instrumental in enhancing efficiency: setting more stringent timelines and clear, concise objectives counteracts the tendency for work to expand to fill the available time. Dunbar’s number applies only to people, but it can be used to limit dependencies on external teams, since the human mind has a finite capacity to maintain external relationships.

However, before applying these laws, you should have clear goals and collect proper metrics and KPIs so that you can measure the baseline and the improvements they bring. You should also be cautious about applying these laws prematurely, as that may make things worse. Finally, when solving scalability and performance problems, it is vital to focus on global optimization, scaling the entire organization or system, rather than local optimization that focuses strictly on a specific part of the system.

March 6, 2018

Tips from the second edition of “Release It!”

Filed under: Design,Methodologies,Technology — admin @ 4:19 pm

The first edition of “Release It!” has been one of the most influential books I have read; it introduced a number of methods for writing fault-tolerant systems, such as the circuit-breaker and bulkhead patterns, so I was excited to read the second edition when it came out. Here are a few tips from the book:

In the first chapter, the author defines stability in terms of robustness, i.e., “A robust system keeps processing transactions, even when transient impulses, persistent stresses, or component failures disrupt normal processing.” He recommends focusing on longevity bugs such as resource leaks; e.g., you can set timeouts when invoking network or database operations and detect dead connections before reading or writing. The author cautions against tightly coupled systems that can easily propagate failures to other parts of the system. Similarly, a high number of integration points increases the probability of a failure in one of those dependencies. The author suggests looking into blocking calls that can lead to deadlocks when using multiple threads, and scrutinizing resource pools that get exhausted by blocking operations. Another cause of instability is a chain reaction, where the failure of one server increases the load on the remaining servers; this can be remedied with the bulkhead or circuit-breaker patterns. High memory usage on a server can also constrain resources, and the author recommends using external caching systems such as Redis, memcached, etc. To monitor the health of the system, the author suggests using synthetic (mock) transactions to ensure it is working as expected and keeping metrics on errors, especially login errors and high-latency warnings. One common self-inflicted failure is a self-denial attack, such as a marketing campaign that drives a traffic spike, which can be mitigated by using a shared-nothing architecture or reducing fan-in on shared resources. Servers can also be subjected to the dogpile effect, where load spikes after an upgrade, cron job, or configuration change; this can be mitigated by using random clock skew, adding random jitter, and using exponential backoff. Finally, the author recommends monitoring slow responses, removing unbounded result sets, and failing fast.

Here is a summary of stability patterns that the author recommends:

  • Apply timeouts at integration points and use delayed retries when failures occur.
  • Apply the circuit-breaker pattern to prevent cascading failures (along with timeouts); see the sketch after this list.
  • Apply the bulkhead pattern to partition the system in the event of a chain-reaction failure.
  • Steady state
  • Data purging
  • Log files
  • Fail fast, restart fast and reintegrate
  • Let it crash with limited granularity, e.g. boundary of actor.
  • Supervision for monitoring, restarts, etc.
  • Shed Load
  • Back-pressure – queues must be finite for finite response time
  • Governor – create governor to slow rate of actions (from automated response) so that humans can review it.
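Here is a minimal Python sketch of a circuit breaker wrapping timed-out calls; the thresholds and API are hypothetical (the book describes the pattern, not this code):

import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None           # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None       # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result

# Usage: breaker.call(requests.get, url, timeout=2) fails fast once the
# downstream dependency has failed repeatedly, instead of tying up threads.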

The next few chapters focus on networks, machines, and processes for building and deploying the code. The author offers several mechanisms for scaling, such as load balancing with DNS, using a service registry for upgrades and fail-over, configuration, transparency, and collecting logs and metrics. The author recommends shedding load when under high load, or returning HTTP 503 to notify the load balancer. For example, a bounded queue length can be calculated as (max-wait-time / mean-processing-time + 1) * processing-threads * 1.5; a small sketch of this calculation follows below. You can also reject connections at the listen queue and return a 503 error to prevent clients from immediately reconnecting. The control-plane chapter recommends tools for administration, including a postmortem template covering what happened, an apology, and a commitment to improvement, and it emphasizes system failures (as opposed to human errors). The author recommends adding indicators such as traffic indicators, business transactions, users, resource-pool health, database-connection health, data consumption, integration-point health, cache health, etc.
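As an illustration, here is a small Python sketch of that queue-length calculation with hypothetical numbers (clients willing to wait 2 seconds for 100 ms requests across 8 threads):

def bounded_queue_length(max_wait_time: float, mean_processing_time: float, processing_threads: int) -> int:
    return int((max_wait_time / mean_processing_time + 1) * processing_threads * 1.5)

print(bounded_queue_length(max_wait_time=2.0, mean_processing_time=0.1, processing_threads=8))  # 252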

The security chapter offers standard best practices from OWASP, such as using parameterized queries to protect against SQL injection (a small sketch follows below) and using high-entropy random session IDs stored in cookies to protect against session hijacking and fixation. To protect against XSS (when user input is rendered in HTML without escaping), filter input and escape it when rendering. The author recommends using a random nonce and a strict SameSite cookie policy to protect against CSRF. Similarly, the author recommends the principle of least privilege, access control, etc. Admin tools can offer capabilities for resetting circuit breakers, adjusting connection-pool sizes, disabling specific outbound integrations, reloading configuration, stopping the acceptance of load, and toggling feature flags.
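As an illustration, here is a minimal Python sketch of a parameterized query using the standard-library sqlite3 module; the table and values are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

user_supplied = "alice' OR '1'='1"             # malicious input is treated as data, not SQL
rows = conn.execute("SELECT id FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)                                    # [] -- no injection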

For ease of deployment, the author recommends automation, immutable infrastructure, continuous deployment, and rolling changes incrementally.

The author offers several recommendations on process and organization, such as the OODA loop for fast learning, functional/autonomous teams, evolutionary architecture, asynchrony patterns, loose clustering, and creating options for the future.

Lastly, the author offers chaos engineering as a way to test the resilience of your system, using the Simian Army or writing your own chaos monkey. Overall, the new edition adds chapters on scaling, deployment, security, and chaos engineering, along with more war stories from the author’s consulting work.

February 24, 2009

Does software quality matter?

Filed under: Methodologies — admin @ 9:24 pm

In the last couple of weeks, I followed an interesting debate between Bob Martin and Joel Spolsky/Jeff Atwood (listen to Hanselminutes) on the issue of SOLID principles. The SOLID acronym stands for five principles, i.e.,

  • Single Responsibility Principle – based on the principle of cohesion from Tom DeMarco’s work on structured analysis and design; it mandates that a class should have one, and only one, reason to change.
  • Open Closed Principle – based on Bertrand Meyer’s book Object-Oriented Software Construction; it says that a class should be open for extension without modifying it.
  • Liskov Substitution Principle – introduced by Barbara Liskov; it says that derived classes must be substitutable for their base classes.
  • Dependency Inversion Principle – states that you should depend on abstractions, not on concrete implementations.
  • Interface Segregation Principle – states that you should make fine-grained interfaces that are client specific.

I learned these principles many years ago and attended Bob Martin’s session at the Best Practices 2004 conference, where he talked about them. These principles sound good, though in practice they should be broad guidelines rather than hard rules. The heat of the debate was about the way these principles have been evangelized by Bob Martin, who insisted on using them all the time. Over the years, I have seen similar debates between Cedric Beust and Bob Martin over the use of test-driven development. There is also a debate on the topic of TDD and architecture between Bob Martin and Jim Coplien. Overall, I find that the issues from these debates boil down to the following:

Usable software vs High quality but unused software

One of the controversial points Jeff Atwood raised was that quality does not matter if no one is using your application. I find there is a lot of truth to that. In fact, this is the route most startups take when developing a new idea and trying something new. For example, everyone blamed Twitter for choosing Rails when it had scalability and reliability issues. However, Twitter would not have existed if it had been written with the most scalable and reliable language or framework but taken two more years, or if the application itself weren’t as useful. I tend to follow the age-old advice: first make it work, then make it right, and then make it fast. There is a reason why rapid prototyping frameworks such as Rails, Django, the Zend Framework, etc. are popular: they allow you to focus more on business functionality and reduce time to market. So I agree that the first goal of software should be to solve real problems and add value. Nevertheless, if the first implementation is horrible, it takes a herculean effort to make it right, and some companies, like Friendster, never recover.

Customer Experience vs Internal design

One of the earliest pieces of advice I got on software development was to write the manual before writing the code. It forces you to solve the customer’s business problem rather than starting from a top-down architecture, which is similar in spirit to behavior-driven development. I find that the most useful software is developed bottom-up and driven by its users. Kent Beck often says that you can’t hide a bad architecture with a good GUI. Nevertheless, for an average user, usability matters a lot. I remember back in the early 90s, IBM OS/2 was a superior operating system to Windows but largely lost the market due to usability (and marketing) issues. The same could be said of why people think Macs are better than PCs. Rails is also a good example: it became popular because you could whip up a web app in ten minutes, despite the fact that its code has been plagued with maintenance issues from a monolithic architecture and tricks like chains of alias methods. Other examples include WordPress and Drupal, both written in PHP, which lead the blogging and content-management space due to their usability rather than the quality of their code. Again, as your software crosses some threshold of users it may have to be redesigned; e.g., Rails recently announced that it will merge with another framework, Merb, in 3.0 because Merb was designed with a micro-kernel and pluggable architecture. This also reminds me of the merger of Struts and WebWork, which turned out to be a failure. Joel Spolsky cautions about software rewrites in his blog and book, and I have also blogged about software rewrites earlier. In the end, you want to extend your application incrementally using the Strangler Fig model, which is not an easy thing to do. Ultimately, people matter more than technology, processes, or best practices in software development, as good people can ship good software regardless of the language or tools they use.

Evolutionary architecture vs Up front architecture

This was brought up in the debate between Jim Coplien and Bob Martin, where Jim took the position of more upfront design and architecture and Bob took the position of evolutionary architecture. I have a lot of respect for Jim Coplien; I still have a copy of Advanced C++ that I bought back in ’92, and it introduced me to the concepts of abstraction, body/handle, and envelope/letter, which are a sort of DIP. In the interview with Bob Martin, Jim Coplien raised a lot of good points about how YAGNI and test-driven, bottom-up design can create an architectural meltdown. Though I believe good software is developed bottom-up, I do believe some architecture and domain analysis is beneficial. I am not necessarily suggesting BDUF (big design up front) or BRUF (big requirements up front), but iteration-zero-style architecture and domain analysis when solving a complex problem. For example, domain-driven design by Eric Evans and responsibility-driven design by Rebecca Wirfs-Brock require working closely with domain experts to analyze the business problems and capture essential knowledge that may not be obvious. Any investment in proper domain analysis simplifies the rest of development and makes your application more extensible. A number of agile methodologies, such as feature-driven development and DSDM, encourage some upfront architecture and domain analysis, which I find beneficial.

Extensibility and maintenance

Once your product is a hit and loved by users, your next task becomes extending it and adding more features. At this stage, all the -ilities such as scalability, performance, security, and extensibility become more important. Each team can decide what practices and principles are appropriate and follow them. Agile methodologies encourage collective ownership and pair programming, which can spread knowledge and skills, though there are alternatives such as code reviews or mentoring. I think having a technical lead who ensures overall quality and keeps the bar up for the rest of the developers can help with extensibility and maintenance.

Test driven Development

Bob Martin has been an adamant proponent of test-driven development with his three laws of TDD. I blogged about the original debate between Cedric Beust and Bob Martin back in June 2006 and showed how Bob Martin’s position was not pragmatic. This reaction has also been echoed by Joel, Jeff, Cedric, and Jim, who agree that 100% coverage is unrealistic. Lately, more and more people are joining this group; I recently watched an interview with Luke Francl, who expressed similar sentiments. In an earlier blog post, I wrote about the various types of testing required for writing reliable, enterprise-level software. One of the selling points of unit testing has been the ease of refactoring with confidence, but I have also found that too many unit tests stifle refactoring because they require changing both the code and the tests. I have found that testing only public interfaces, or somewhat higher-level testing without dependence on internal implementation, can produce reliable software and is less prone to breakage when refactoring. No one disputes the value of testing, but it needs to be practical.

Dependency management

A number of SOLID principles and extended principles, such as the reuse and dependency principles, mandate removing or minimizing dependencies between packages. Those principles were formulated in a C++ environment that historically had problems with managing dependencies. Again, the problem comes from a religious attitude toward these principles. For example, I have worked in companies where different teams shared their code with Java jar files and it created dependency hell. I have used JDepend in the past to reduce such dependencies, but it is not needed in every situation. I am currently building a common platform that will be used by dozens of teams, and I intentionally removed or minimized static dependencies and instead used services.

Conclusion

Unfortunately, in the software industry new buzzwords appear every few years that are often used to foster consulting or sell books. There are lots of books on principles, best practices, etc., and the entire agile movement has turned into prescriptive methodologies as opposed to being pragmatic and flexible based on the environment. People like Uncle Bob apply these principles in a draconian fashion, such as always stressing 100% test coverage or always keeping interfaces separate from implementations (DIP). Such advice may bring more money from consulting work or publications, but it is disingenuous for practical use. In the end, people matter more in software development than any particular technology or principle, and you need good judgment to pick the practices and solutions that suit the problem at hand.

October 14, 2008

Observe, Orient, Decide and Act

Filed under: Methodologies — admin @ 8:49 pm

I have heard agile evangelists talk about the influence of John Boyd on agile methodologies. I have also read about his story in books like Release It! and, more recently, Simple Architectures for Complex Enterprises, which I am currently reading. You can read more about Boyd’s background from the Wikipedia link above, but he came up with the iterative cycle of observe, orient, decide, and act (OODA). One of the key findings he made was that a shorter feedback or iteration loop of OODA with lower quality was better than a longer, tiring OODA cycle with high quality. Despite the fact that everyone calls his or her organization agile, this feedback loop is the real essence of agility.

July 7, 2008

Reaction vs Preparedness

Filed under: Methodologies — admin @ 8:36 pm

I have struggled with a culture of fire-fighting and heroism and have discussed these issues several times in my blog [1], [2], [3], [4]. Over the weekend, I saw Bruce Schneier’s talk on security. He talked about Hurricane Katrina’s devastation and said that politicians don’t like to invest in infrastructure because it does not get noticed; he showed how reaction is considered more important than preparedness. Another thing he mentioned was that investing in new projects is received better than investing in existing projects. I find that IT culture has a very similar value system, where heroes or martyrs who react (on a daily basis) to emerging crises and fires are noticed by managers and receive promotions and accolades. The same culture does not value investing to better prepare for disasters or emergencies. Unfortunately, once people get promoted they move on and leave the pile of shit for someone else. Of course, the next team would rather rewrite than keep the old system. In the last two years, I had to rewrite two major systems where the previous systems had been written less than two years earlier. Maybe they will be rewritten by someone else when I leave. The cycle will continue…

May 19, 2008

Sprint like Hell!

Filed under: Methodologies — admin @ 9:44 pm

I think Sprint is the wrong metaphor for an iteration, because managers seem to think it means developers will run like hell to finish the work. Though my project has adopted Scrum recently, it is far from the true practices, principles, and values of agility. For the past many months, we have been developing like hell, and as a result our quality has suffered.
