
Meet our writer
Joe Shure
Former Senior Product Manager, Betterment
Joseph and his team drove efficiency and delight in building tools for Betterment’s customer-facing teams. Earlier, he served hundreds of clients as a financial planner at LearnVest (acquired by Northwestern Mutual) before joining the team that built that company's financial planning algorithm. He holds a B.A. in History from Rutgers University.
Articles by Joe Shure
Test Big or Go Home

Running lots of small split-tests made my team feel productive and gave us plenty to talk about. But it failed to answer the questions that mattered most.

On a bright winter afternoon last year, I looked out the conference room window. Clean, blue sky stretched over the wall of mid-rise buildings across Manhattan’s 23rd Street. I listened passively as the hair-splitting arguments set in.

“Well…I’m not sure we can come to a conclusion on this test—it’s only been running for 12 days.”

“Well, I’m not sure it’s the placement of the button that was bad—it might just be the copy we used in this version of it.”

“Well, I’m not sure this new page design is what worked—it might just be the image we used in this version of it.”

I was the product manager on the growth team responsible for getting visitors to Betterment’s website to sign up and deposit money with us. We’d host a meeting every Friday afternoon to review split-tests in progress in the hope of calling them—that is, to determine whether recent changes we’d made to the website or signup flows were driving more users to open accounts and make deposits. Fancying ourselves data-driven, we sought to guard against flawed observations that might lead us to a wrong conclusion. Our approach was rigorous but exhausting.

As I zoned out and looked at the sky, I thought about something Katherine Kornas, a product leader who had joined the company a few months earlier—and to whom I reported at the time—had shared at a recent product team meeting. “It’s not a failure when your split-test loses,” she’d said, “it’s a failure when your split-test is inconclusive.” When you change your product in the hope of spurring signups and it actually drives signups down, she explained, you’ve at least learned what doesn’t work. When you change your product and nothing happens, you’ve learned almost nothing about users’ needs or behaviors.
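Katherine’s maxim has a statistical backbone: a split-test gets “called” only when the gap between variants is too large to chalk up to chance, and a timid or underpowered test tends to stay inconclusive. Here is a minimal sketch of that arithmetic, assuming a simple two-sided, two-proportion z-test on signup conversion; the traffic and conversion counts are invented for illustration and reflect nothing about Betterment’s actual numbers or tooling.

from math import sqrt
from statistics import NormalDist

def call_split_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Call a split-test: 'inconclusive' unless the difference in
    conversion rates clears a two-sided two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    if p_value >= alpha:
        return "inconclusive"
    return "B wins" if p_b > p_a else "A wins"

# A small test: a one-word copy tweak shown to modest traffic.
print(call_split_test(conv_a=48, n_a=1000, conv_b=55, n_b=1000))      # inconclusive
# A big test: a redesigned flow, more traffic, a similar relative lift.
print(call_split_test(conv_a=480, n_a=10000, conv_b=560, n_b=10000))  # B wins

Run it and the small tweak stays inconclusive while the bigger, bolder change is easy to call, which is the maxim in miniature.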
At that moment, learning about what users wanted was more important than ever. We were still figuring out how best to draw site visitors into a new type of product. Smart Saver was an account filled with money market and government bond funds, designed to earn a modest but reliable return. Having spent years making our name as the place to invest for the long term, Betterment had developed a product built for the kind of short-term dollars that one might otherwise store in a savings account. Smart Saver was the precursor to the cash management products we’d go on to launch in the ensuing year: Cash Reserve and Checking.

My team and I wondered how to feature Smart Saver in our signup flows and on our website. Should it sit front-and-center on the homepage? That would be great for those who arrived in search of that product, but it might confuse visitors who arrived in the hope of rolling over a 401(k). How could we talk about this particular product without obscuring the larger story of what Betterment was all about? No one knew for sure, but Katherine had the expertise to guide us toward the answer.

“We need to think beyond the page,” she told me in one of our weekly one-on-one meetings. Katherine could sense that the split-testing approach I’d managed—with its small tests that changed only a single thing at a time—would never crack the bigger questions around how best to sell our burgeoning cash products. She urged me to think bigger, about entire flows, or even the broader swath of the user experience.

That was tough to hear. I was proud of running dozens of tests and reviewing them ruthlessly. It made me feel like the data-driven product manager I thought I was supposed to be. Even if running numerous tests led to a lot of arguments in our tedious review meetings, it also gave me a lot to talk about—in release notes, at team meetings, and in reports to the broader growth team. As I re-read my notes from the time, though, I can see that for all my talking, I shared little in the way of strategy-shaping insights.

Katherine was right to call for fewer, but better, tests. Instead of starting with an idea for a feature or design change, she said, I should start with a big question: what’s the thing we’re trying to learn? What change to the user’s experience—ideally across multiple stages of their journey—would teach it to us? How might our strategy for the coming months change if the test wins? What if it loses?

The big test

We went on to test a new, vastly different experience for users who arrived at our site with an interest in Smart Saver. We sent them down a signup flow that was unique for its silence on our investing accounts. We wanted to avoid mentioning any product other than the one the user came for. When a customer finished signing up and arrived at their dashboard, we affirmed that they had opened a Smart Saver account and invited them to fund it.

This new approach earned strong results. Strong enough, in fact, to beat back the kind of hair-splitting questions that had arisen in test review meetings of the past. It established a pattern that Betterment still uses today. If a site visitor seems intent on opening a certain kind of account—say, Cash Reserve, an IRA or a general investing account—we draw them into a flow that focuses tightly on getting them settled in. We wait until later to tell them what else about Betterment they might enjoy.

A test is likely big “enough” if it earns buy-in from stakeholders that X result should indicate Y change in your approach. Imagine you’ve replaced your homepage hero image with a picture of a dog. As the test gets underway, you tell your colleagues from marketing, design and engineering that if the dog picture drives signups, you’ll display only pictures of dogs throughout your public-facing pages, your signup flows and even in-app. It would be easy for anyone in the room to raise doubts.

“Well, maybe dogs just work on the homepage, but not on other pages.”

“Well, maybe it’s just because this dog is a Labrador; people might not like German shepherds.”

“Well, maybe people just like four-legged animals. Can we try a cheetah?”

You’ll stand a better chance of winning the room if you suggest a more comprehensive test: placing pictures of various dogs throughout the user journey—not just in one spot. You might propose that, if the test drives not only signups but also deeper levels of engagement, it will show that dogs are key to acquisition and retention. The key is to strike an agreement with your stakeholders in advance about what result you’re looking for and, more importantly, how it would change your strategy for the months to come.

In thinking bigger about split-tests, my team and I learned how best to draw users into our service. Personally, I learned something even more important. While one should certainly be afraid of drawing the wrong conclusion from a change to the user experience, one must also beware of making changes that are so small as to yield no conclusion at all.
Why Product Managers Must Resist “Quick Wins”

Tackling unplanned tasks from stakeholders made me feel helpful, but it bogged my team down. Here’s how I narrowed our focus to the most impactful tasks.

“How hard would it be…”

“…to add a section for notes?”

“…to make the button a little bigger?”

“…to add a banner to the homepage?”

For a long time, I loved this question. My quick response of, “Oh, it shouldn’t be too hard!” felt like a cheap ticket out of a feeling I’ve often faced as a product manager: that I was failing to do enough for the teams that depend on me.

I sat in a small, fluorescent-lit room during a one-on-one with a colleague, also named Joe, who was a leader in our company’s customer service team. Building tools for customer service was one of my team’s responsibilities, so I’d listen as Joe looked at his laptop and read from a long list of indignities his team faced—disorganized screens, drop-down menus that lacked the one item they always seemed to need, and more. As I gazed down at the foot-long log that, for some reason, sat sideways between the two chairs in the small meeting room, a knot formed in my stomach. What kind of job was I doing if Joe’s team had to put up with so many menial, seemingly fixable problems?

I snapped out of it when Joe asked a question: how hard would it be to redesign the instructions a customer sees when transferring a retirement account to Betterment?

“Oh, that? Um, that should be pretty easy!” I said.

It continued. At UX review sessions and in chat messages, stakeholders would ask how hard it would be to add just one more thing. I would smile and add the task to our backlog or to the upcoming milestone of a project. I was helping! At least, I thought I was.

It’s not about difficulty

Meanwhile, projects were taking longer than we’d expected. I’d feel woozy looking down from the swaying height of our backlog. Small things had gained a stubborn habit of turning into one big thing. I realized my love of quick wins, of feeling helpful, was doing no one any favors.

I’ve since changed my tune. When someone asks, “How hard would it be…” I respond with a question of my own: “How big would it be?” How much of an impact would it make? How much time would it save? How much revenue would it bring?

What I’ve begun to share with stakeholders is that our backlog is home to dozens of tasks that we’ll never get to, simple as they might be. They linger there because they would fail to make a big difference in my team’s pursuit of our higher goals. It’s no use (anymore) to advocate for a bit of work solely on the basis of how easy it would be to complete.

That cuts both ways. Our roadmap is full of projects that we expect will make a big impact: projects that will help not only our customers, but also those of our colleagues who serve them. These are projects we eagerly pursue—even though they are hard.
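To make the reframing concrete, here is a minimal sketch of ranking a backlog by expected impact instead of effort. The items, effort estimates, impact scores, and the Request type are all hypothetical, invented for illustration rather than drawn from any real backlog.

from dataclasses import dataclass

@dataclass
class Request:
    name: str
    effort_days: float   # the old question: "How hard would it be?"
    impact_score: float  # the new question: "How big would it be?"

backlog = [
    Request("Add a notes section", effort_days=1.0, impact_score=2.0),
    Request("Make the button bigger", effort_days=0.5, impact_score=1.0),
    Request("Redesign rollover instructions", effort_days=15.0, impact_score=40.0),
]

# Sorting by effort surfaces quick wins; sorting by impact surfaces
# the work that actually advances the team's higher goals.
easiest = min(backlog, key=lambda r: r.effort_days)
biggest = max(backlog, key=lambda r: r.impact_score)
print(f"Easiest: {easiest.name}")  # Make the button bigger
print(f"Biggest: {biggest.name}")  # Redesign rollover instructions

The easiest item and the biggest item are rarely the same one, which is the whole argument in three lines of sorting.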