
September 6, 2020

Review of “Software Engineering at Google”


In “Software Engineering at Google”, engineers from Google share practices and lessons from the software development life cycle at Google. Here are a few lessons from the book that apply to most engineers, leaving aside Google’s unique internal practices and tools:

Software Engineering

Hyrum’s law

With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behavior of your system will be depended on by somebody.
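
To make Hyrum’s law concrete, here is a minimal Java sketch (the HyrumsLawDemo class is hypothetical, not from the book): the map’s contract promises nothing about iteration order, yet the order is observable, so some caller will eventually depend on it and break when the implementation changes.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical illustration of Hyrum's law: iteration order of a HashMap
    // is not part of the documented contract, but it is observable behavior.
    public class HyrumsLawDemo {
        public static void main(String[] args) {
            Map<String, Integer> scores = new HashMap<>();
            scores.put("alice", 10);
            scores.put("bob", 20);

            // A caller that parses this output positionally now depends on
            // the unspecified order; switching to TreeMap, or a JDK change
            // to the hashing scheme, silently breaks that caller.
            scores.forEach((name, score) -> System.out.println(name + "=" + score));
        }
    }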

Shifting Left

Finding problems earlier in the developer workflow usually reduces costs.

Hiding Considered Harmful

Share your work early to avoid personal missteps and to get your ideas vetted.

Bus Factor

Disperse knowledge to reduce the bus factor.

Three pillars of social interaction

  • Humility (lose the ego and learn to give/take criticism)
  • Respect
  • Trust (fail fast and iterate)

Blameless Post-Mortem Culture

  • Summary
  • Timeline of events
  • Primary cause
  • Impact/damage assessment
  • Actions for a quick fix
  • Actions to prevent recurrence
  • Lessons learned

Knowledge sharing

  • Psychological safety
  • Respect
  • Recognition
  • Developer guides
  • Static analysis
  • Newsletters
  • Readability certification – each changelist (CL) requires approval from a readability-certified engineer

Leadership

Servant leadership

  • Create an atmosphere of humility and trust
  • Help the team achieve consensus and serve your team

Antipatterns

  • Hire pushovers
  • Ignore low performers
  • Ignore human issues
  • Be everyone’s friend
  • Compromise the hiring bar
  • Treat your team like children

Positive patterns

  • Lose the ego – ownership, accountability, responsibility
  • Find people who can give constructive feedback
  • Be a Zen master – a leader is always on stage; stay calm and ask questions
  • Be a catalyst (build consensus)
  • Remove roadblocks
  • Be a teacher and mentor
  • Set clear goals – create mission statement for the team
  • Be honest (give hard feedback without using compliment sandwich)
  • Track happiness – recognition
  • Delegate but get your hands dirty
  • Seek to replace yourself
  • Know when to make waves
  • Shield your team from chaos
  • Give your team air cover – defend team from uncertainty and frivolous demands
  • Let your team know when they are doing well

Leading

  • Always be deciding (weigh trade-offs)
  • Identify the blinders
  • Identify the key trade-offs
  • Decide/Iterate (e.g. within web search the trade-offs are latency, quality, and capacity – pick two)
  • Always be leaving
  • Build a self-driving team
  • Divide the problem space (delegate sub-problems to leaders)
  • Anchoring a team’s identity (rather than putting a team in charge of a specific product/solution, anchor team to the problem)
  • Always be scaling
  • Cycle of success: analysis (trade-offs/consensus) -> struggle (fake it) -> traction (progress) -> reward (the reward for solving a problem is a new, bigger problem)
  • Important vs. urgent (delegate urgent tasks, dedicate time to important ones, use tools such as GTD)
  • Learn to drop balls (split tasks into the top 20%, middle 60%, and bottom 20% – let the bottom 80% drop)
  • Protect your energy (vacations/breaks)

Measurement

Goals:

A goal is a desired result or property, stated without reference to any metric. The book’s QUANTS framework covers five goal areas: Quality of the code, Attention from engineers, Intellectual complexity, Tempo/velocity, and Satisfaction.

Signal:

Signals are the things you would like to measure, even when they are not directly measurable. E.g., if the goal is learning from the readability process, a signal could be engineers reporting that they learned from it.

Metrics:

A metric is a measurable proxy for a signal, e.g. a quantitative readability survey asking whether the readability process has improved code quality. Each metric should be traceable back to a signal and a goal.

Style guide principles

  • Rules must pull their weight
  • Optimize for the reader
  • Be consistent (scaling, minimize ramp-up, resilience to time)
  • Setting the standard (coding conventions)
  • Avoiding error-prone/surprising constructs
  • Concede to practicalities
  • Use tools such as error checkers and code formatters

Code Review

  • Correctness & comprehension
  • Approval from OWNERS (ownership is recorded in a regular OWNERS file in the repository)
  • Approval from a readability-certified reviewer
  • Knowledge sharing
  • Be polite & professional
  • Write small changes
  • Write good change descriptions (see the example after this list)
  • Keep reviewers to a minimum
  • Automate where possible
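
As a rough illustration of a good change description (the format below is a common convention, not necessarily Google’s exact house style): a one-line summary of what the change does, followed by the context a reviewer needs.

    Fix off-by-one error in pagination of search results

    The results page dropped the last item whenever the result count was an
    exact multiple of the page size, because the page count was computed with
    integer division that never rounded up.

    Adds a regression test covering the boundary case.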

Documentation

  • Know your audience (experience level, domain knowledge, purpose)
  • Documentation types:
    • Reference documentation
    • Design documents
    • Tutorials
    • Conceptual documentation
    • Landing pages
  • Documentation philosophy (WHO, WHAT, WHEN, WHERE, and WHY)

Testing

Benefits

  • Less debugging
  • Increased confidence in changes
  • Improved documentation
  • Simpler reviews
  • Thoughtful design, fast/high quality releases
  • Test scope – 5% end-to-end/functional, 15% integration, 80% unit
  • Beyoncé Rule – “If you liked it, then you shoulda put a test on it”
  • Pitfalls of large test suites (often brittle from overusing mocks)
  • Test Certified (a Project Health (pH) tool gathers metrics such as test coverage and test latency)

Unit testing

  • Prevent brittle tests
  • Strive for unchanging tests (of the four kinds of change – pure refactorings, new features, bug fixes, and behavior changes – only behavior changes should force existing tests to change)
  • Test via public APIs (to avoid brittle tests)
  • Test State, Not Interactions (avoid mock objects)
  • Write clear tests; make them complete and concise (using helper methods, object factories, and assertions)
  • Test behaviors, not methods (assert each behavior in a separate test method rather than writing one test per production method, e.g. display_showName, display_showWarning; see the sketch after this list)
  • Structure tests to emphasize behavior
    • comment the Given/When/Then sections
    • use And to break them down further
    • dependent behaviors can use multiple When/Then combinations
  • Name tests after the behavior being tested
  • Don’t put logic in tests
  • Write clear failure messages
  • Test and code sharing: be DAMP (Descriptive And Meaningful Phrases), not DRY (it is fine to duplicate some object-construction logic rather than hide it in helper methods)
  • Shared Values – use builder methods to construct objects instead of static constants
  • Shared setup (be careful about putting too many dependencies in common setup)
  • Share helpers and validations/assertions
  • Define test infrastructure – sharing code across multiple test suites
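
A minimal JUnit 5 sketch of several of these points (the Account class and test names are hypothetical, not from the book): tests are named after the behavior under test rather than the method, structured with Given/When/Then comments, and contain no logic.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    // Hypothetical class under test: a tiny account with a withdraw() behavior.
    class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        int balance() { return balance; }
        void withdraw(int amount) {
            if (amount > balance) throw new IllegalArgumentException("insufficient funds");
            balance -= amount;
        }
    }

    // One behavior per test, named after the behavior, with no logic in tests.
    class AccountTest {
        @Test
        void withdraw_reducesBalance() {
            // Given an account with funds
            Account account = new Account(100);
            // When a withdrawal is made
            account.withdraw(40);
            // Then the balance reflects it
            assertEquals(60, account.balance());
        }

        @Test
        void withdraw_rejectsOverdraft() {
            // Given an account with limited funds
            Account account = new Account(100);
            // When an oversized withdrawal is attempted, Then it is rejected
            assertThrows(IllegalArgumentException.class, () -> account.withdraw(150));
        }
    }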

Test Doubles

  • Testable code
  • Applicability – use doubles judiciously to avoid brittleness and complexity
  • Fidelity – how closely the double’s behavior mimics the real implementation
  • Avoid mocking frameworks if possible
  • Seams – use dependency injection to make the code testable (see the sketch after this list)
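
A small sketch of a seam created through constructor-based dependency injection, with a hand-rolled fake (UserStore, InMemoryUserStore, and UserService are invented names): the production code depends on an interface, so a test can substitute a lightweight in-memory implementation without any mocking framework.

    import java.util.HashMap;
    import java.util.Map;

    // The seam: production code depends on this interface rather than on a
    // concrete database client, so tests can inject a double.
    interface UserStore {
        void save(String id, String name);
        String find(String id);
    }

    // A hand-rolled fake: lightweight and fast, but lower fidelity than
    // the real storage implementation.
    class InMemoryUserStore implements UserStore {
        private final Map<String, String> users = new HashMap<>();
        public void save(String id, String name) { users.put(id, name); }
        public String find(String id) { return users.get(id); }
    }

    // The class under test receives its dependency through the constructor.
    class UserService {
        private final UserStore store;
        UserService(UserStore store) { this.store = store; }
        void register(String id, String name) { store.save(id, name.trim()); }
        String lookup(String id) { return store.find(id); }
    }

A test can then simply construct new UserService(new InMemoryUserStore()) and assert on state; both testing styles are sketched after the Techniques list below.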

Techniques

  • Faking – lightweight implementation but low fidelity
  • Stubbing – specifying the expected return values of calls, typically with mocks
  • Interaction testing – verifying that a method is called as expected; it can lead to brittle, complex tests, so avoid it where possible
  • Real implementations – high fidelity and more confidence, but evaluate them for execution time and determinism
  • Prefer state testing over interaction testing, and use interaction testing only for state-changing functions (both styles are sketched below)
  • Avoid over-specification
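
To make the state-versus-interaction distinction concrete, a hedged sketch reusing the hypothetical UserService and InMemoryUserStore from above: the first test asserts on observable state through the public API, while the second (using Mockito) only verifies that a call happened, which couples the test to implementation details.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    class UserServiceTest {
        @Test
        void register_storesTrimmedName() {
            // State testing: exercise the public API and assert on the
            // observable result, not on how it was produced.
            UserService service = new UserService(new InMemoryUserStore());
            service.register("u1", "  Ada  ");
            assertEquals("Ada", service.lookup("u1"));
        }

        @Test
        void register_savesToStore() {
            // Interaction testing: verifies that the call happened, but is
            // coupled to the implementation; reserve this style for
            // state-changing calls with no observable state to assert on.
            UserStore store = mock(UserStore.class);
            new UserService(store).register("u1", "Ada");
            verify(store).save("u1", "Ada");
        }
    }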

Large functional tests

  • Obtain a system under test, seed data, perform actions, verify behavior
  • You may chain multiple tests and store intermediate data so that the output of one test serves as the input to another
  • Each SUT is judged on hermeticity (the SUT’s isolation from usage of and interactions with other components) and fidelity (how accurately the SUT reflects the production environment). For example, staging tests run against a staging deployment, but that requires the code to be deployed there first. Avoid third-party dependencies in the SUT environment and use doubles to fake them
  • You can also use record/replay proxies, or consumer-driven contracts that define the contract for both the client and the provider of a service (e.g. Pact contract testing; a hand-rolled sketch follows this list)
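
A hand-rolled sketch of the consumer-driven-contract idea (this is not the actual Pact API; Contract and GreetingProvider are invented names): the consumer pins down the request/response exchange it relies on, and a provider-side test verifies that the provider still honors it.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical contract: the consumer records the one exchange it depends
    // on (in a real setup this would be published for the provider, e.g.
    // through a Pact broker).
    record Contract(String request, String expectedResponse) {}

    // Hypothetical provider implementation under test.
    class GreetingProvider {
        String handle(String request) {
            return request.equals("GET /greeting") ? "hello" : "404";
        }
    }

    class ProviderContractTest {
        private final Contract contract = new Contract("GET /greeting", "hello");

        @Test
        void provider_honorsConsumerContract() {
            // The provider must keep satisfying what the consumer relies on.
            assertEquals(contract.expectedResponse(),
                         new GreetingProvider().handle(contract.request()));
        }
    }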

Test data

  • Seeded data
  • Test traffic
  • Domain data – pre-populated data in the database
  • Realistic baseline data
  • Seeding APIs

Verification

  • Manual
  • Assertions

Types of larger tests

  • Functional testing
  • Browser and device testing
  • Performance
  • Load and stress testing
  • Deployment configuration testing
  • Exploratory testing (manual)
  • Bug bashes
  • A/B diff regression testing
  • UAT, Probes and canary analysis (in prod)
  • Disaster recovery and chaos engineering
  • User evaluation

Version Control and Build System

Google uses a monorepo with a one-version rule for version control, which avoids confusing choices, along with a task-based build system. All dependencies also follow the one-version rule, which simplifies deployment. Google also uses static analysis and linters to provide feedback on code before it is committed.

Dependency management

Google uses semantic versioning for managing dependencies. You can think of dependencies as a directed graph with requirements as the edges; finding a mutually compatible set of dependencies is then akin to solving a SAT problem. Minimum version selection (MVS) sidesteps this by picking, for each dependency, the lowest version that satisfies all stated minimum requirements, since a semantic version number alone is not a reliable promise of backward compatibility.
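
A toy illustration of the idea behind MVS (the module name and version numbers are made up): each dependent declares the minimum version it needs, and the algorithm selects the highest of those declared minimums rather than the newest available release.

    import java.util.List;

    public class MinimumVersionSelection {
        public static void main(String[] args) {
            // Hypothetical: three modules in the dependency graph each
            // declare the minimum version of "libfoo" they require.
            List<Integer> requiredMinimums = List.of(2, 5, 3);

            // MVS picks the highest of the declared minimums (here 5),
            // not the newest published release, which keeps builds
            // reproducible without trusting semver compatibility claims.
            int selected = requiredMinimums.stream().max(Integer::compare).orElseThrow();
            System.out.println("selected libfoo version: " + selected);
        }
    }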

Continuous integration

The code moves through edit/compile -> pre-submit -> post-submit -> release-candidate -> rc-promotion -> final-rc phases over time. CI provides fast feedback through automated, continuous builds and delivery. Pre-submit runs fast unit tests, and post-submit runs larger tests (hermetic tests against a test environment, with greater determinism and isolation) to verify changes and SLOs.

Continuous delivery

Google applies the idioms of agility (small batches), automation, isolation, reliability, data-driven decision making, and phased rollouts for continuous delivery. It shifts left to identify problems earlier and ships only what actually gets used.
