<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Growth Engineering]]></title><description><![CDATA[Dedicated to exploring all things growth engineering, providing valuable insights to empower you on your path to drive business impact in your organizations 🚀]]></description><link>https://www.growthengineering.xyz</link><image><url>https://substackcdn.com/image/fetch/$s_!isg8!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4730ca5-9d69-4ce0-a16e-13ff811cee9f_500x500.png</url><title>Growth Engineering</title><link>https://www.growthengineering.xyz</link></image><generator>Substack</generator><lastBuildDate>Sun, 03 May 2026 10:02:16 GMT</lastBuildDate><atom:link href="https://www.growthengineering.xyz/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Satheesh Kumar]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[satheesh@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[satheesh@substack.com]]></itunes:email><itunes:name><![CDATA[Satheesh Kumar]]></itunes:name></itunes:owner><itunes:author><![CDATA[Satheesh Kumar]]></itunes:author><googleplay:owner><![CDATA[satheesh@substack.com]]></googleplay:owner><googleplay:email><![CDATA[satheesh@substack.com]]></googleplay:email><googleplay:author><![CDATA[Satheesh Kumar]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Change Log: Unlocking the Power of Experimentation]]></title><description><![CDATA[In an era where experimentation drives innovation and growth, maintaining a structured and centralized record of all experiments is 
paramount.]]></description><link>https://www.growthengineering.xyz/p/change-log-unlocking-the-power-of</link><guid isPermaLink="false">https://www.growthengineering.xyz/p/change-log-unlocking-the-power-of</guid><dc:creator><![CDATA[Satheesh Kumar]]></dc:creator><pubDate>Sun, 01 Dec 2024 22:44:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a5e0b8cf-e05d-48b2-823c-8a38224f6e96_2570x1428.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>In an era where experimentation drives innovation and growth, maintaining a structured and centralized record of all experiments is paramount. Growth teams need to understand the customer, execute on the right things, and deliver results for the business. This is where the concept of a change log for experimentation comes into play.</em></p><h3><strong>1. What is a Change Log for Experimentation?</strong></h3><p>A change log for experimentation is essentially a detailed, structured record of all the experiments a team or organization runs. Think of it as a diary of every hypothesis you test: the timeline, the experiment design document, the results you achieve, and the learnings you unlock. It begins at the idea backlog stage and continues through to the final results and beyond.</p><h3><strong>2. Why is it Essential?</strong></h3><ul><li><p><strong>Knowledge Base Creation:</strong></p><ul><li><p>A change log acts as a comprehensive knowledge base, cataloging all experiments, metrics, cohorts, growth areas, and documentation, including valuable insights from completed experiments and the evolution of ideas. Having a central record ensures that every team member, both present and future, can quickly understand what was tried, what worked, and what failed. 
This is especially valuable when onboarding new team members, as it provides them with a wealth of knowledge and insights at their fingertips.</p></li></ul></li><li><p><strong>Measure Experimentation Program Success:</strong></p><ul><li><p>Measuring success in experimentation often hinges on more than just the number of experiments conducted. Relying solely on this metric can lead to less ambitious optimization experiments and miss the complete picture. To truly gauge success, it's essential to consider additional factors, such as impact estimations, time to build, and how expected outcomes compare with actual results. This fosters an environment of continuous improvement and accountability: it not only assesses the precision of impact and engineering-effort forecasts but also refines future estimates, ultimately boosting the experimentation program&#8217;s success rate.</p></li></ul></li><li><p><strong>Control Tower Functionality:</strong></p><ul><li><p>The change log can also serve as a control tower, helping teams avoid collisions by providing real-time information on upcoming and ongoing experiments. It enables teams to coordinate better and streamline the experimentation process.</p></li></ul></li><li><p><strong>Enhanced Insights Sharing:</strong></p><ul><li><p>With a centralized change log, knowledge sharing among teams becomes streamlined and efficient. Team members can quickly understand the state and outcome of ongoing or completed experiments across all growth areas without sifting through scattered data. This is vital when there are multiple growth teams, as it helps spread user learnings derived from experiments widely across the company.</p></li></ul></li><li><p><strong>Timeline Construction &amp; Key Metrics Correlation:</strong></p><ul><li><p>A change log aids in building a timeline of past changes when needed, allowing teams to correlate those changes with impacts on key metrics. 
This is essential for understanding the long-term impact of experiment launches and making informed decisions for future initiatives.</p></li></ul></li></ul><h3><strong>3. Getting Started</strong></h3><p>To facilitate a smooth transition to using change logs, here are two templates:</p><ul><li><p><strong><a href="https://docs.google.com/spreadsheets/d/19N3qo6wy9ezgJ5Gt87sTPLZHRmG27n_HdShr-RmXyCc/edit?usp=sharing">Simple Change Log Template</a>:</strong></p><ul><li><p>Ideal for a team new to experimentation, this template is straightforward and easy to use. It includes basic fields that cover the essential aspects. Use it along with the experimentation design document.</p></li></ul></li><li><p><strong><a href="https://www.airtable.com/universe/expZpCNVlkaoLGNAr/evelyn-experiment-velocity-engine-lifting-your-numbers">Airtable EVELYN (Experiment Velocity Engine Lifting Your Numbers) Template</a>:</strong></p><ul><li><p>This comprehensive template built by Darius is for teams looking for a more sophisticated and detailed approach to documenting their experiments, with better filtering capabilities and formulas.</p></li></ul></li></ul><h3><strong>4. Leveraging Technology</strong></h3><p>While templates are beneficial, a custom internal tool can elevate the change log's effectiveness and avoid the manual creation and maintenance of spreadsheets. Depending on the resources available and the size and maturity of the company, this may be a worthwhile investment. Such a tool, for instance, can:</p><ul><li><p><strong>Enable Advanced Search:</strong> Easily find past experiments based on various criteria.</p></li><li><p><strong>Highlight Key Learnings:</strong> Quickly surface the main takeaways from each test for subscribers and leaders.</p></li><li><p><strong>Visualize with Timelines:</strong> Understand the sequence of experiments and their impacts over time.</p></li><li><p><strong>Provide a Control Tower View:</strong> Get a holistic view of all ongoing and completed experiments.</p></li></ul><h3><strong>5. 
Conclusion</strong></h3><p>Maintaining a change log for experimentation isn't just about record-keeping; it's about harnessing the collective intelligence of your team. It's a testament to the value of every test, every insight, and every innovation. Whether you're just starting or scaling rapidly, having a structured way to capture and share learnings can make all the difference.</p>]]></content:encoded></item><item><title><![CDATA[Experimentation Design & Documentation Template]]></title><description><![CDATA[Execute well-crafted experiments.]]></description><link>https://www.growthengineering.xyz/p/experimentation-design-and-documentation</link><guid isPermaLink="false">https://www.growthengineering.xyz/p/experimentation-design-and-documentation</guid><dc:creator><![CDATA[Satheesh Kumar]]></dc:creator><pubDate>Fri, 01 Nov 2024 21:40:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4730ca5-9d69-4ce0-a16e-13ff811cee9f_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><em>An experimentation design document is crucial in helping an experiment owner think through all cases&#8212;to frame, provide clarity, and democratize knowledge cross-functionally, improving future strategies and optimizing resources. It also enhances collaboration and decision-making within the team.</em></p><p><em>Scroll down for the template you can use immediately</em>; it should take ~10 minutes to put to use. Enjoy!</p></div><h1><strong>1. Problem Statement &#128679;</strong></h1><p><strong>What is the business outcome?</strong><br>Start with the outcome we want to improve. What metric or area shows there's an opportunity to drive meaningful impact?</p><p><strong>What is the customer problem?</strong><br>What problem are we solving, and for whom? Why does this problem exist? 
Focus on the user's experience and pain point.</p><p><strong>What observation, data, or insight led us to identify the problem or opportunity?</strong><br>What research, insight, or data surfaced this as something worth exploring?</p><p><strong>How does this connect to the current growth model and strategy?</strong><br>Which part of the growth model (e.g., acquisition, activation, retention) does this address? How does solving this problem tie back to the company&#8217;s broader strategy and mission?</p><p>You can also provide links to related artifacts, e.g. user studies, research that&#8217;s been done, and previous experiments in this area.</p><h1><strong>2. Hypothesis &#129514;</strong></h1><p>We believe that by changing the <strong>[independent variable] </strong>we expect <strong>[dependent variable]</strong> (not) to <strong>[increase/decrease]</strong> because <strong>[some reason]</strong>.</p><p><em>Example:</em></p><p>We believe that by changing <strong>product suggestions at checkout from static to personalized </strong>we expect <strong>average revenue per user to increase</strong> because <strong>users are more likely to buy products that are relevant to them</strong>.</p><p><em>A hypothesis is something that we believe to be true based on what we know about users. A hypothesis should be testable and falsifiable&#8202;&#8212;&#8202;that means there&#8217;s something out there you can practically observe that would lead you to reconsider the hypothesis.</em></p><ul><li><p><em>Start with the words &#8220;we believe&#8221;</em></p></li><li><p><em>Use the word &#8220;because&#8221;</em></p></li><li><p><em>Don&#8217;t use &#8220;if&#8221; or &#8220;then&#8221; (that&#8217;s a prediction)</em></p></li></ul><p><em>Resources:</em></p><ul><li><p><em><a href="https://medium.com/@talraviv/thats-not-a-hypothesis-25666b01d5b4">Clear learnings come only from clear hypotheses.</a></em></p></li></ul><h1><strong>3. 
Experiment Design &#128202;</strong></h1><p>How are we designing this experiment to test our hypothesis? Which group of people are in the experiment? What does the Control vs Variant experience look like?</p><p>How long will we run the experiment? Have the effort and time to build the test been identified? Ensure the test does not collide with other running experiments.</p><p>Assets (add links)</p><ul><li><p>Jira issue</p></li><li><p>Design mockups</p></li><li><p>Report that shows the current state of things</p></li><li><p>Figma or Miro comparing control vs treatment experience.</p></li></ul><h2><strong>3.1. Experiment Cohort/Target Audience &#127919;</strong></h2><p>Describe the user base we will target with this experiment, e.g.:</p><ul><li><p><strong>Markets:</strong> e.g. "US only", "All English", etc</p></li><li><p><strong>Customer type: </strong>New, Existing, Paid, Free, etc</p></li><li><p><strong>Signup pathways: </strong>All, Front of Site, Email, etc</p></li><li><p><strong>Tiers:</strong> Basic, etc.</p></li><li><p><strong>Other:</strong> (for example, users who did x action or users who have specific characteristics - whatever it may be)<br></p></li></ul><h2><strong>3.2. Test Length &#9203;</strong></h2><p>Given how we are measuring the test, and the size of the audience, how long will it take us to get to significance?</p><p>Use a <em><a href="https://vwo.com/ab-split-test-duration/">test duration estimator</a></em> to state how long it will take for the experiment to reach statistical significance.</p><p><em>Example: Test should take 2 weeks to get to significance</em></p><h1><strong>4. Metrics &#128200;</strong></h1><p>Describe exactly which metrics we are measuring to determine results for the test. List primary and secondary metrics if applicable. Please include baseline metrics (e.g. the control). Describe how we will calculate the metric if it is not a standard metric or if we are inferring from behavior. 
For example, if the goal is conversion improvement but we are using clicks on the plan page options as a leading indicator, spell that out. Any other metrics that could be cannibalized should be considered and documented.</p><p><em><strong>Example:</strong></em></p><p><em><strong>Primary test metric</strong>: Conversion rate for New Users. Current baseline is x%</em></p><p><em><strong>Secondary metrics:</strong> Plan mix, Term mix, etc. Current baselines are...</em></p><p>Also list other metrics to consider or be aware of for this test that might be impacted during analysis.</p><p>You can also include links to relevant reports, or other artifacts that may help whoever is performing or interpreting the analysis.</p><h2><strong>4.1. Instrumentation &#128225;</strong></h2><p>Do we have all the event logging in place to gather these metrics? What events and properties are missing?</p><p>Provide a list of the tracking that will be used for the experiment so it can be verified, both during development and post-release, by examining the reported data. This can be general guidelines that let developers format the event names, or the explicit names required.</p><p>Please specify <strong>when</strong> these should fire.</p><h1><strong>5. Definition of Success &#127942;</strong></h1><p>We will determine that the variant is a winner if it [increases/decreases] the primary metric by &#8805; x% relative to the control group at statistical significance.</p><h1><strong>6. Pre-Mortem &#9904;&#65039;</strong></h1><p>This is to help strengthen test preparation. Based on the potential results, what actions will we take?</p><ul><li><p><strong>Implementation plan</strong>: How will we take the experiment variation and turn it into the permanent product experience if it wins?</p></li><li><p><strong>Iteration plan</strong>: What about our initial assumption has changed? 
Will we form a new hypothesis?</p></li><li><p><strong>Expansion plan</strong>: How will we <strong>apply the learnings</strong> from this experiment widely and double down?</p></li><li><p><strong>Holdout groups:</strong> Could the actual impact differ once the variant becomes the permanent product experience, and should you use holdout groups or follow-up data checks?</p></li></ul><h1><strong>7. Analysis/Results &#128202;</strong></h1><p>Our hypothesis turned out to be [correct / incorrect].</p><p>You can post a link to a spreadsheet or other document if results are captured elsewhere and link to funnels demonstrating user behavior, etc.</p><h1><strong>8. Learnings &#129504;</strong></h1><p>We learned that&#8230; What did we learn from running this experiment? How do these learnings impact the next steps? Tie them back to the pre-mortem test preparation work and share the learnings widely.</p><p>When sharing the learnings widely, it may help to categorize them, i.e. high value vs informational.</p><p><strong>Additional Notes:</strong></p><ul><li><p>Feel free to copy and customize the <strong><a href="https://docs.google.com/document/d/12ZspIYxY2NnRM5wLaM-fK_O_1hI-ilX8SJbOS0JXQcE/edit?usp=sharing">Experimentation design template</a>.</strong></p></li><li><p>Experimentation is still expensive, so ensure that one is needed (this is a <strong><a href="https://blog.patreon.com/please-please-dont-a-b-test-that">good post</a></strong> that talks about when you might not need one).</p></li><li><p>You might also want to evaluate if the test is complex enough that you need to run an <strong>A/A test</strong>. (Test setups in <strong>new surfaces</strong> or <strong>more than one surface</strong> often require both preplanning and an A/A test run to ensure data collection is set up for analysis.)</p></li><li><p>You can create an experiment writeup document using the template and complete steps 1-6 before development begins. 
</p></li><li><p>Depending on the documentation tools you use, e.g. Confluence or Notion, you can enable creating this document from the template with the click of a button.</p></li></ul><p>This template is continually evolving, and your input is appreciated. Feel free to share any suggestions or comments you may have, and enjoy using it!</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.growthengineering.xyz/p/experimentation-design-and-documentation/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.growthengineering.xyz/p/experimentation-design-and-documentation/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Growth Engineering Newsletter]]></title><description><![CDATA[Welcome to "Growth Engineering", deep dives on all things Growth Engineering! &#128640;]]></description><link>https://www.growthengineering.xyz/p/introducing</link><guid isPermaLink="false">https://www.growthengineering.xyz/p/introducing</guid><dc:creator><![CDATA[Satheesh Kumar]]></dc:creator><pubDate>Tue, 26 Sep 2023 23:40:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe4730ca5-9d69-4ce0-a16e-13ff811cee9f_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In a space that's constantly evolving, staying ahead of the curve is essential for success. 
This newsletter is dedicated to exploring all things growth engineering, providing valuable insights to empower you on your path to driving business impact in your organizations.</p><p>&#127775; What to Expect &#127775;</p><p><strong>&#128640; Growth Engineering:</strong> Dive deep into innovative engineering systems and strategies that fuel growth.</p><p><strong>&#128736;&#65039; Growth Processes:</strong> Uncover the proven methodologies, frameworks &amp; tools that empower cross-functional growth teams.</p><p><strong>&#127760; Growth Teams:</strong> Learn the secrets of building and nurturing experiment-driven, growth-focused teams.</p><p><strong>&#128161; Actionable Tips:</strong> Receive practical &amp; actionable tips that you can implement immediately.</p><p>&#128198; <strong>Updates</strong>: Expect our newsletter right in your inbox, keeping you informed and inspired.</p><p><em>If you've already subscribed, THANK YOU. That means the world to me. <a href="https://www.growthengineering.xyz/?showWelcome=true">Subscribing to the newsletter is free and incredibly simple</a>.</em></p><p><em>Let's grow together! &#127793;&#128640;</em></p>]]></content:encoded></item></channel></rss>