When most people think of testing, they think of increasing the stability of the product. The tests let you know when something is broken so you can fix it before delivering a new feature. More tests means fewer bugs, right? Well, I'm here to tell you that's not always the case, and that I believe it's often the other way around. Tests are only valuable when you already have stability.
I have a background in testing. As an intern in college, I started off on the automation testing team because it's where I could get a decent job that paid well and let me work year-round while also being a full-time student. In hindsight, I wish I had gotten a wider variety of experience by varying my internship opportunities, but that's a story for another day. That internship turned into a full-time job offer right out of college, where I got even deeper into testing paradigms, frameworks, tools, and so on. All of that was at a large company that had very stable parts of its product offerings, plus a dedicated testing team that had built a sophisticated testing suite. It was basically an entire product in itself.
Fast forward to today, and I'm on the development side at a startup. We don't have a dedicated testing team (well, we did, but that guy moved to development for reasons you're about to read; AKA, it was me). We've since contracted a QA team, but I'm going to set that situation aside for the sake of this article (though my points still apply).
While I was doing automation testing at my current company, there were ten other developers whose code I was trying to test. That's not an easy pace to keep up with, but it wasn't even the primary problem. The product was changing at a fundamental level faster than I could update the existing tests, while also writing new ones for the new features. My time and energy could have been better spent elsewhere, which is one of the reasons I ended up in development (the other being a desire for personal growth). I figured I could make a larger impact on the quality of the software by being part of its development up front, rather than catching things after the fact. I also started reviewing almost every developer's code to try to stabilize the development of the product with consistent patterns. Similarly, I started joining conversations with the design and product teams ahead of time and insisting on more consistency, thereby increasing the stability of the various parts of the application. How well it worked is probably better judged by others, but I know I learned a ton in the process.
Until you have a stable product (usually defined by product-market fit, a core set of features, and a solid customer base), tests can be futile, especially from an automation perspective. You waste time (and money, in engineers' salaries) that would be better spent developing new features to test out in the market. That doesn't mean you forgo stability and testing altogether, as a poor-quality product can cost you the faith of early customers. What it does mean is that you should pick the parts you stabilize carefully. I read a book that talked about designing software architecture for 10x your current scale (and no more). I think that's also true for feature design. Don't try to design a feature that will serve everyone until you know it serves a much smaller subset of customers. That smaller customer base will help you work out the kinks and give you the early money you need to build a sustainable product that lasts. Once you have that lasting core feature set, then you're ready for stability and the testing that comes along with it.