
Meet our writer
Mike Reust
President of Retail, Betterment
I’m a product-minded technologist with a passion for building and delivering products that help improve people’s lives. At Betterment, I’m focusing that passion and energy on personal finance, helping craft software that wrangles the complexity of the American financial system. We’re building smarter, more efficient money management for everyone.

Articles by Mike Reust
The Evolution of the Betterment Engineering Interview
Betterment’s engineering interview now includes a pair programming experience where candidates are tested on their collaboration and technical skills.

Building and maintaining the world’s largest independent robo-advisor requires a world-class team of human engineers. This means we must continuously iterate on our recruiting process to remain competitive in attracting and hiring top talent. As our team has grown from five to more than 50 engineers in just the last three years, we’ve significantly improved our ability to make clear hiring decisions and shortened our total hiring timeline.

Back in the Day

Here’s how our interview process once looked:

- Resumé review
- Initial phone screen
- Technical phone screen
- Onsite, day 1:
  - Technical interview (computer science fundamentals)
  - Technical interview (modeling and app design)
  - Hiring manager interview
- Onsite, day 2:
  - Product and design interview
  - Company founder interview
  - Company executive interview

While this process helped in growing our engineering team, it began showing some cracks along the way. The main recurring issue was that hiring managers were left uncertain as to whether a candidate truly possessed the technical aptitude and skills to justify an employment offer. While we tried to construct computer science and data modeling problems that led to informative interviews, watching candidates solve these problems still wasn’t getting to the heart of whether they’d be successful engineers once at Betterment.

In addition to problems arising from the types of questions asked, we saw that one of our primary interview tools, the whiteboard, was actually getting in the way; many candidates struggled to communicate their solutions using a whiteboard in an interview setting. The last straw for using whiteboards came from feedback provided by Betterment’s Women in Technology group. When I sat down with them to solicit feedback on our entire hiring process, they pointed to the whiteboard problem-solving dynamics (one to two engineers sitting, observing, and judging the candidate standing at a whiteboard) as unnatural and awkward. It was clear this part of the interviewing process needed to go. We decided to allow candidates the choice of using a whiteboard if they wished, but it would no longer be the default method for presenting one’s skills.

If we did away with the whiteboard, then what would we use? The most obvious alternative was a computer, but many of our engineers expressed concerns with this method, having had bad experiences with computer-based interviews in the past. After spirited internal discussions, we landed on a simple principle: we should provide candidates the most natural setting possible to demonstrate their abilities. As such, our technical interviews switched from whiteboards to computers. Within the boundaries of that principle, we considered multiple interview formats, including take-home and online assessments, and several variations of pair programming interviews. In the end, we landed on our own flavor of a pair programming interview.

Today: A Better Interview

Here’s our revised interview process:

- Resumé review
- Initial phone screen
- Technical phone screen
- Onsite:
  - Technical interview 1:
    - Ask the candidate to describe a recent technical challenge in detail
    - Set up the candidate’s laptop
    - Introduce the pair programming problem and explore the problem
    - Pair programming (optional, time permitting)
  - Technical interview 2: pair programming
  - Technical interview 3: pair programming
  - Ask-Me-Anything session
  - Product and design interview
  - Hiring manager interview
  - Company executive interview

While an interview setting may not offer pair programming in its purest sense, our interviewers truly participate in the process of writing software with the candidates. Instead of simply instructing and watching candidates as they program, interviewers now work with them on a real-world problem, and they take turns in control of the keyboard. This approach puts candidates at ease, and feels closer to typical pair programming than one might expect. As a result, in addition to learning how well a candidate can write code, we learn how well they collaborate.

We also split the main programming portion of our original interview into separate sections with different interviewers. It’s nice to give candidates a short break in between interviews, but the main reason for the separation is to evaluate the handoff: how well a candidate explains their design decisions and progress to the next interviewer.

Other Improvements

We also streamlined our question-asking process and hiring timeline, and added an opportunity for candidates to speak with non-interviewers.

Questions

Interviews are now more prescriptive regarding non-technical questions. Instead of multiple interviewers asking a candidate the same questions based on their resumé, we prescribe topics based on the most important core competencies of successful Betterment engineers. Each interviewer knows which competencies (e.g., software craftsmanship) to evaluate. Sample questions, not scripts, are provided, and interviewers are encouraged to tailor the competency questions to each candidate’s background.

Timeline

Another change is that the entire onsite interview is completed in a single day. This can make scheduling difficult, but in a city as competitive as New York is for engineering talent, we’ve found it valuable to get to the final offer stage as quickly as possible.

Discussion

Finally, we’ve added an Ask-Me-Anything (AMA) session, another idea provided by our Women in Technology group. While we encourage candidates to ask questions of everyone they meet, the AMA provides an opportunity to meet with a Betterment engineer who has zero input on whether or not to hire them. Those “interviewers” don’t fill out a scorecard, and our hiring managers are forbidden from discussing candidates with them.

Ship It

Our first run of this new process took place in November 2015. Since then, the team has met several times to gather feedback and implement tweaks, but the broad strokes have remained unchanged. As of July 2016, all full-stack, mobile, and site-reliability engineering roles have adopted this new approach, and we’re continually evaluating whether to adopt it for other roles as well. Our hiring managers now report that they have a much clearer understanding of what each candidate brings to the table. In addition, we’ve consistently received high marks from candidates and interviewers alike, who prefer our revamped approach.

While we didn’t run a scientifically valid split test of the new process versus the old (it would have taken years to reach statistical significance), our hiring metrics have improved across the board. We’re happy with the changes to our process, and we feel that it does a great job of fully and honestly evaluating a candidate’s abilities, which helps Betterment continue growing its world-class team. For more information about working at Betterment, please visit our Careers page.

Meet Blazer: A New Open-Source Project from Betterment (video)
All teams at Betterment are responsible for teasing apart complex financial concepts and then presenting them in a coherent manner, enabling our customers to make informed financial decisions. One of the tools the engineering team uses to approach this challenge is a popular JavaScript framework called Backbone. While we love the simplicity and flexibility of Backbone, we’ve recently encountered situations where the Backbone router didn’t perfectly fit the needs of our increasingly sophisticated application. To meet these needs, we created Blazer, an open-source extension of the Backbone router, and in the spirit of open-source software, we are sharing it with the community.

To learn more, we encourage you to watch the video below, featuring Betterment lead engineer Sam Moore, who reveals the new framework at a Meetup in Betterment’s NYC offices. Take a look at Blazer.

https://www.youtube.com/embed/F32QhaHFn1k
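
Blazer’s own interface is covered in the video above; as a general illustration of the hook it builds on, here is a minimal sketch of extending the stock Backbone router. The route names and handlers are hypothetical examples for illustration, not Blazer’s actual API.

```javascript
// Minimal sketch of extending Backbone.Router, the hook point an
// extension like Blazer builds on. Assumes Backbone (and its
// Underscore dependency) are loaded on the page. Routes and handlers
// here are hypothetical, not Blazer's API.
var AppRouter = Backbone.Router.extend({
  routes: {
    "": "home",                  // matches the root URL
    "accounts/:id": "showAccount" // matches e.g. #accounts/42
  },

  home: function () {
    console.log("render the home view");
  },

  showAccount: function (id) {
    console.log("render account " + id);
  }
});

var router = new AppRouter();
Backbone.history.start(); // begin monitoring the URL and dispatching routes
```
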
One Massive Monte Carlo, One Very Efficient Solution
We optimized our portfolio management algorithms in six hours for less than $500. Here’s how we did it.

Optimal portfolio management requires managing a portfolio in real time, including taxes, rebalancing, risk, and circumstantial variables like cashflows. It’s our job to fine-tune these decisions to help our clients, and it’s very important that they be robust to the widest possible array of potential futures our clients might face.

We recently re-optimized our portfolio to include more complex asset allocations and risk models (it will soon be available). Next up was optimizing our portfolio management algorithms, which manage cashflows, rebalances, and tax exposures. It’s as if we had optimized a car’s engine, and now needed to test it on the race track with different weather conditions, tires, and drivers. Normally, this is a process that can literally take years (which may explain why legacy investing services are slow to switch to algorithmic asset allocation and advice). But we did things a little differently, which saved us thousands of computing hours and hundreds of thousands of dollars.

First, the Monte Carlo

The testing framework we used to assess our algorithmic strategies needed to fulfill a number of criteria to ensure we were making robust and informed decisions. It needed to:

- Include many different potential futures
- Include many different cash-flow patterns
- Respect path dependence (taxes you pay this year can’t be invested next year)
- Accurately test how the algorithm would perform if run live

To test our algorithms-as-strategies, we simulated the thousands of potential futures they might encounter. Each set of strategies was confronted with both bootstrapped historical data and novel simulated data. Bootstrapping is a process by which you take random chunks of historical data and re-order them (see the sketch below). This made our results robust to the risk of solely optimizing for the past, a common error in the analysis of strategies. We used both historical and simulated data because they complement each other in making future-looking decisions:

- The historical data allows us to include important aspects of return movements, like auto-correlation, volatility clustering, correlation regimes, skew, and fat tails. It is bootstrapped (sampled in chunks) to help generate potential futures.
- The simulated data allows us to generate novel potential outcomes, like market crashes bigger than previous ones, and, generally, futures different from the past.

The simulations were detailed enough to replicate how our strategies would run in our live systems, and included, for example, annual tax payments due to capital gains over losses, as well as cashflows from dividends and from the client saving or withdrawing. They also showed how an asset allocation would perform over the lifetime of an investment. During our testing, we ran over 200,000 simulations of daily-level returns for our 12 asset classes, each covering 20 years’ worth of returns, with realistic dividends at the asset-class level. In short, we tested a heckuva lot of data.

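To make the bootstrapping concrete, here is a minimal sketch of a block (chunked) bootstrap over a single asset class’s daily returns. Sampling contiguous chunks, rather than single days, preserves short-range structure like volatility clustering. The chunk size and the synthetic input series are illustrative assumptions, not our production parameters.

```javascript
// Minimal sketch of a block bootstrap: stitch together randomly chosen
// contiguous chunks of a historical return series until the desired
// length is reached. Chunk size and inputs are illustrative only.
function blockBootstrap(returns, chunkSize, outputLength) {
  var result = [];
  while (result.length < outputLength) {
    // Pick a random start index that leaves room for a full chunk.
    var start = Math.floor(Math.random() * (returns.length - chunkSize + 1));
    result = result.concat(returns.slice(start, start + chunkSize));
  }
  return result.slice(0, outputLength);
}

// Example: a synthetic 10-year daily history (~2,520 trading days),
// resampled into a 20-year path (~5,040 days) in 60-day chunks.
var history = [];
for (var i = 0; i < 2520; i++) {
  history.push((Math.random() - 0.5) * 0.02); // placeholder daily returns
}
var simulatedPath = blockBootstrap(history, 60, 5040);
```
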
Normally, running this Monte Carlo would have taken nearly a full year to complete on a single computer, but we created a far more nimble system by piecing together a number of existing technologies. By harnessing the power of Amazon Web Services (specifically EC2 and S3) and a cloud-based message queue called IronMQ, we reduced that testing time to just six hours, at a total cost of less than $500.

How We Did It

1. Create an input queue: We created a bucket with every simulation we wanted to run, more than 200,000 in all. We used IronMQ to manage the queue, which allows individual worker nodes to pull inputs themselves instead of relying on a central system to monitor worker nodes and push work to them. This avoids the problem found in traditional systems where a single node acts as the gatekeeper and can get backed up, either breaking the system or leaving testing time idle.

2. Create 1,000 worker instances: With Amazon Web Services, we signed up for time on 1,000 virtual machines. This increased our computing power a thousandfold, and buying time on these machines is cheap. We employed m1.small instances, betting on quantity over individual machine power.

3. Each machine pulls a simulation: Thanks to the maturation of modern message queues, it is simpler and more advantageous to orchestrate jobs in a pull-based fashion than in the old push style, as we mentioned above. In this model there is no single controller; each worker acts independently. When a worker is idle and ready for more work, it takes it upon itself to go out and find it. When there’s no more work to be had, the worker shuts itself down (see the sketch at the end of this post).

4. Store results in a central location: We used another AWS service, S3, to store the results of each simulation. Each file, with detailed asset allocation, tax, trading, and returns information, was archived inexpensively in the cloud. Each file was also named algorithmically, allowing us to refer back to it and do granular audits of each run.

5. Download results for local analysis: From S3, we could download the summarized results of each of our simulations for analysis on a "regular" computer. The resulting analytical master file was still large, but small enough to fit on a regular MacBook Pro.

We ran the Monte Carlo simulations over two weekends. Keeping our overhead low while delivering top-of-the-line portfolio analysis and optimization is a key way we keep investment fees as low as possible. This is just one more example of where our quest for efficiency, and your happiness, paid off.

This post was written with Dan Egan.
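
For the curious, here is a minimal sketch of the pull-based worker loop described in step 3. The in-memory queue stands in for IronMQ and the logging stub stands in for the S3 upload; none of these names are the actual client APIs we used.

```javascript
// Sketch of the pull-based worker loop from step 3: each of the 1,000
// workers runs this independently, pulling jobs until the queue is empty.
// The queue below is an in-memory stand-in for IronMQ; storeResult is a
// stand-in for the S3 upload.
var queue = [{ id: "sim-00001" }, { id: "sim-00002" }]; // stand-in job queue

function fetchNextJob() {
  return queue.length > 0 ? queue.shift() : null; // pull one job, or null if drained
}

function runSimulation(job) {
  // Placeholder for the full path-dependent simulation of one scenario.
  return { id: job.id, finalValue: Math.random() };
}

function storeResult(result) {
  // Stand-in for archiving the result file to central storage,
  // named algorithmically so it can be audited later.
  console.log("archived " + result.id);
}

function workLoop() {
  var job;
  while ((job = fetchNextJob()) !== null) {
    storeResult(runSimulation(job));
  }
  // Queue drained: a real worker would shut its instance down here
  // to stop incurring cost.
  console.log("queue empty; worker shuts itself down");
}

workLoop();
```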