
Instructor

Amy Jo Kim

Game Designer, Social Architect, Startup Coach

Transcript

Lesson: Accelerating Early Product with Amy Jo Kim

Step #2 Experiment: Tackle highest risks first

If you want to move faster and smarter (that's my premise: everyone is in a hurry, and it's about moving faster and smarter, not just faster and not just smarter), then the best way to do that is to start a testing loop very early, and have your first test be need-finding. Before that, make a clear hypothesis. There are eight or nine questions. When I work with clients I have a list of 60 questions I ask them, or a list of 15; I go very deep if it's a big client where there are a lot of issues and we're integrating with an existing business model. But stripped down, there are eight or nine key questions that you should ask yourself.

Who are my early customers? A good way to think about it is: if I were going to run a pilot project of 50 to 100 people to test this idea, who would those people be? Work backwards from there. Who are my early customers? Who's my passionate early market? If you've got three hypotheses, write them down. Not "everybody." A passionate early market is not "women 30 to 45 who recently had kids," for instance. That is not narrow enough. It's got to be driven by need, and if you don't know what the need is, form a hypothesis. Just make a hypothesis. If you're wrong, it's fine.

Then you say, “What unmet need do they have that I'm targeting with what I'm doing? What is driving that need? Are they anxious about something?” And you try to describe it.

Okay, what's my solution? Bear in mind that there might be multiple solutions to that unmet need for that market; yours is one of many. What solution do I think I'm providing? Then, what value prop does my solution have that's going to make that person see that my thing meets their unmet need? Those four things work together. That's really your product strategy.

Then you say, okay, what is my unfair advantage? What is it that my team and I do better than anybody? What is my passion, such that if I got customer feedback that there was some other need, I would choose not to meet it, because I need to stay rooted in my unfair advantage, my passion? Maybe you have some secret sauce you've developed, some technology. Maybe it's your knowledge, maybe it's your connections, maybe it's your stick-to-it-iveness, maybe it's the composition of your team. But you have some unfair advantage that's going to help you navigate your customer feedback and keep true north while also pivoting.

How do you manage true north while pivoting? This is how you do it: you think about these questions and start iterating.

Next question: how do I know if I'm heading toward success? What are my early metrics? Are they subjective? Probably they should be early on. If you say, “Oh, I'm just going to get objective metrics first,” you're on the wrong track. But what are those early metrics? Is it NPS score, am I going to do an exit interview, am I going to do a pilot? Think about it. What are those early metrics? Not metrics that tell me definitively, but ones that show me whether or not I'm on the right track. What are they?

Then finally, taking all of that: what are the highest-risk assumptions in there that I need to know the most about, soonest? Is the highest-risk assumption that my product meets a need? Is it the technology behind my product? Is it that I have to find the right channels to reach that audience? What is the highest-risk assumption? Then you design your test around your highest-risk assumptions, the things that, if they weren't true, would make your whole premise crumble, and you get that right. Then you move on from there.

I just told you, at a high level, what the MVP Canvas is. It is a one-page document that forces you and your team to ask those questions, form crisp hypotheses, and then prioritize those hypotheses around risk. So it's what Eric Ries preaches, it's what Laura Klein preaches, it's what many very smart lean startup people would preach, but through a different lens: a product design lens, a game design lens.

The difference is really that I'm not talking about the stuff in the Business Model Canvas; this is like a subset. But it forces you to see that your solution is not the only solution to the problem. You decouple your solution from the problem and the audience, and then, when you put them back together, you have a very strong foundation.

I do online multiplayer experiences, and those have a live team, right? So of course you keep testing, but the testing changes over time. At the beginning you're maybe doing informational interviews (I like doing speed interviews; it's an interesting technique). Then you're testing something rough, maybe interactive prototypes. Then you build something, and you test that for more user interface polish. User interface testing is different from concept testing; you do them in different phases.

Then maybe you polish, and at some point you're doing bug testing, right? Then you release. Are you done? Probably not. There will be some bugs everyone finds, and then you'll need to deliver more because people will want a new feature, and you're back through the whole thing again. It's a loop that you keep doing.

What's different is that the better you get at this, the more you know what to test when: how to focus your testing, when to test the interface, when to do concept testing, and when to pull in your early adopters versus your everyday folks. You need to do all of that to do a real product. But in the products I work on, it never stops, because those are products that keep evolving.
