Michelle Beauchemin
Industrial Engineer, Texas Instruments
Meg Hermes
Customer Alliance Manager, JMP
Texas Instruments (TI) is a leading US-based semiconductor company. With more than 80,000 analog and embedded processing chip products, TI has one of the most comprehensive semiconductor portfolios in the world. The company’s internal manufacturing capacity has supported decades of growth.
Industrial engineer Michelle Beauchemin joined TI in 2015 as a process engineer. After moving into her current role in industrial engineering, she identified an opportunity to expand the use of statistical tools and automation to drive meaningful improvements at TI’s factory in South Portland, Maine.
Michelle sat down to chat with JMP Customer Alliance Manager Meg Hermes during the 2023 Discovery Summit Americas in Palm Springs, California.
Meg: You said you’ve been using JMP from your very first day at Texas Instruments.
Michelle: I was doing mainly data exploration and a small bit of DOE work in that role, though at the time, I didn’t have a full understanding of JMP or what it could do. It was only when I moved into an industrial engineering role about a year ago that I decided to really dive in and learn.
There was a specific project I wanted to work on too – validating part of our capacity model. At first, it looked like my options were going to be Excel or Spotfire, but that wasn't going to work. JMP, being customizable, fit every need that I had for that project.
Having the need and use case for JMP, I dove in headfirst. There have been many days since that I don't even open Excel, which I could not have fathomed if you asked me a year ago! I rely on JMP for almost everything now.
Meg: So, tell me about model validation. How did it all start?
Michelle: Our capacity model – which is currently an Excel model – is based on the processing speed of different tools. Each toolset runs a variety of recipes, and we store the speed and throughput data – essentially parts per hour – for each recipe for each toolset. Those throughput values change over time, whether it's the result of the tool itself degrading, or because we've made a purposeful change, and the model is no longer accurate if we don’t keep it updated as actual processes change. And sometimes we add new recipes and then need to document those speeds as well.
Essentially, it's very difficult to document every change for everything that happens – and to make sure that the model has the right values at all times. Figuring out the throughput for each recipe historically for actual process runs – and then taking that information and plotting distributions by recipe – allows me to see variation over time and across the full data set.
It’s important to include a high volume of historical processing data during the throughput validation process to make sure we capture the tool at its best possible performance, not just average performance. Elsewhere in the model, we have the inefficiencies modeled as well. But if you're combining your average process speed with your known inefficiencies, you're double counting the inefficiencies. There is always some amount of manual review required to make sure I'm not including incorrect runs or non-standard runs. Then I methodically go through each recipe to pick where to draw the line on what we want to model. Scripting in JSL has allowed me to automate the process of data wrangling and constructing the results into an interactive dashboard, and I can repeat the process for any toolset with just a few minutes of script setup.
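The validation approach Michelle describes – building per-recipe throughput distributions from historical runs, excluding non-standard runs, and anchoring on best demonstrated speed rather than the average so inefficiencies aren’t double counted – can be sketched as follows. This is an illustrative Python sketch, not TI’s actual JSL tooling; the data shape and recipe names are hypothetical.

```python
# Sketch of the throughput-validation idea described above.
# Assumptions (hypothetical, not from TI): each run is a record of
# recipe name, observed parts per hour, and a standard-run flag.
runs = [
    # (recipe, parts_per_hour, is_standard_run)
    ("recipe_A", 118.0, True),
    ("recipe_A", 121.5, True),
    ("recipe_A", 64.0, False),  # non-standard run: excluded from validation
    ("recipe_A", 119.2, True),
    ("recipe_B", 80.1, True),
    ("recipe_B", 83.4, True),
    ("recipe_B", 79.9, True),
]

def demonstrated_throughput(runs, recipe, pct=0.9):
    """Upper-percentile throughput for one recipe, standard runs only.

    Using a high percentile (best demonstrated speed) rather than the
    mean avoids double-counting inefficiencies that the capacity model
    already tracks separately.
    """
    speeds = sorted(pph for r, pph, ok in runs if r == recipe and ok)
    # Nearest-rank index for the pct-th percentile of the sorted speeds.
    idx = max(0, int(round(pct * len(speeds))) - 1)
    return speeds[idx]

for recipe in ("recipe_A", "recipe_B"):
    print(recipe, demonstrated_throughput(runs, recipe))
```

In a real workflow the manual-review step Michelle mentions would come first; the percentile choice is exactly the "where to draw the line" judgment call she describes, made per recipe.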
I first did this validation for one toolset, and it revealed that we had a lot more capacity than we thought we did. We didn’t need to put as many resources toward improving throughput as we thought. And I could demonstrate it to the process and industrial engineering teams with data.
Being able to back up throughput validation data with convincing visualizations is really impactful because it’s a big change to update the numbers. You have to convince a lot of people.
Meg: What impact does a finding like that have on the business?
Michelle: Before we updated the numbers, we were planning to devote time and resources towards urgently increasing capacity for that toolset. And getting buy-in from our management to adjust the modeled capacity based on throughput analysis meant we could allocate those resources towards more impactful improvement projects. We didn’t have to go down that path of a process change that might have impacted quality.
So, it was a big decision. I had to feel confident to say, “Based on this analysis, we have enough capacity to run the planned starts mix without making changes to this process.”
Meg: What did you do to ensure your management also shared that confidence in the decision?
Michelle: I started by sharing my analysis with process engineers and the manager for the group with that toolset. He's a big JMP proponent. He's very skilled at scripting, very data-driven. So, he was interested in seeing what I presented. He understands the value of constructing the analysis from historical process data, and he's my advocate in that way. He said, “This makes a lot of sense. This is really good. You should keep doing this.”
And I trusted him enough to adjust my approach based on his feedback. For example, he wanted to look at a different load pattern before making a decision, and when I came back with that data, we adjusted our numbers.
Having his support for the project, his advocacy for showing it to a wider audience, and the opportunity to present to management was so beneficial. That wider audience also saw the opportunity to expand this type of analysis to other toolsets and broader usage.
Meg: It’s so important to have an advocate. It can make all the difference, really.
Michelle: It was really encouraging to have management support at every level to further propel the work I was doing. They very quickly understood the value and supported it.
Meg: We all want to work for a company like that! I hear people say all the time “Our leadership is very traditional” and it’s inspiring to hear how TI has advocates for analytics at all levels of the company. That’s what drives innovation forward.
Michelle: The advocacy and support I’ve received from managers since I first shared my analysis has allowed me the time and space to keep working on this project and keep progressing and expanding it out.
I’ve gotten requests from several people to repeat the analysis for other toolsets when it comes time to evaluate capacity. Now they ask: “Can we validate the model?” It’s been really nice to be trusted when I come back to the team with an analysis that says, “The model is correct” or “We do need to do some activities to increase capacity for this toolset.”
I have the data to visually back up this conclusion and the trust in my process as well. Not to mention, I think people trust the analysis when they see JMP plots. Sharing data in that format earns trust.
Meg: What’s the end goal for model validation at scale – and how do you get there?
Michelle: The end goal is to have something that does monitoring as well as validation.
I have to create a validation tool that someone other than me can use, because right now I can use it, but there may be some assumptions I'm making, or uncertainties [I can account for] since I know how it's built. I want to build in some tools to help other users identify which points to look at, which points to exclude, and how to understand the difference between the two.
From then on, it's ongoing monitoring to detect throughput changes when they happen, rather than validation that looks back at historical data.
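The monitoring idea Michelle describes – detecting throughput changes as they happen rather than discovering them in a later lookback – could be sketched as a simple drift check against the validated baseline. This is a generic illustration, not TI’s method; the tolerance and window would come from the real process.

```python
# Generic drift check for ongoing throughput monitoring (illustrative
# only; thresholds and window sizes are placeholder assumptions).
from statistics import mean

def throughput_drift(baseline_pph, recent_runs, tolerance=0.05):
    """Flag when the recent average throughput (parts per hour)
    deviates from the validated baseline by more than `tolerance`
    (as a fraction of the baseline). Returns (flagged, drift)."""
    recent_avg = mean(recent_runs)
    drift = (recent_avg - baseline_pph) / baseline_pph
    return abs(drift) > tolerance, drift

# Example: a toolset validated at 120 pph whose recent runs average
# well below that should be flagged for review.
flagged, drift = throughput_drift(120.0, [110.0, 112.5, 111.0])
print(flagged, round(drift, 3))
```

A flag like this would prompt the same review Michelle does today – checking whether the change is tool degradation, a purposeful process change, or a new recipe that needs its speed documented.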
Meg: A complete shift in mindset, from reactive to proactive. That’s really exciting.
Michelle: It is. I feel really fortunate that I had a good amount of time to build something that I could demonstrate in order to get buy-in. Now it's like, “Okay, this is your project. You have free rein to work on this when you need to.”
I just love working on it too! It feels weird to be working on something in my job, and stop to think “I shouldn't be doing this. I'm loving it too much! Something must be wrong.” But then you realize, “No, this is exactly what I should be doing.”
Meg: You have your dream job – all because you took an idea and ran with it! That story should inspire other people to level-up their data skills and explore what is possible when you bring a bit more statistical power to the table.
Michelle: That’s certainly the goal. We did a JMP Inspire event at our Maine fab and I so enjoyed being able to help plan the event and bring it to my factory to show everyone, “You're using JMP for small things here and there. But did you realize all the other things you can do with it?”
I want to encourage everybody at my site to use JMP for things they didn't think to use it for previously. Just try going a couple of days without opening Excel! Bring everything into JMP and give it a go.
And I have to say, JMP makes it so easy to get started with scripting. Once you break that barrier of intimidation and see how much a script can do, it just opens up so many possibilities. I hope I can encourage others to have that same realization.
Meg: You mentioned the JMP Inspire event and I’m sure TI offers other JMP training opportunities as well. But training requires a significant time investment. What case do you make to management that training is worth engineers’ time?
Michelle: Our management is already very supportive of JMP and using statistics, so I don’t see it as “making a case” but rather “here's an opportunity that people can get excited about.”
We use JMP so much – even just for visualization – so it's very visible to management. There's a lot of support from the top down to learn and become more proficient with JMP, and to get new hires up to speed.
This week at Discovery Summit I’ve been hearing a lot of talk about “building a culture of data literacy.” But I think Texas Instruments is there already. The challenge that we have is not to introduce analytics, but to make sure that we maintain [the optimal balance of domain and statistical knowledge]. As people retire, we don't want to lose the technical knowledge they have.
When you’re just starting out in your career and you hear people talk about analytics, you don't fully understand the technical application. It’s easy to repeat the buzzwords without understanding the meat behind it. So I think the challenge we have is reintroducing the true technical knowledge as the foundation of data literacy. I have a long way to go in my own development of statistical knowledge, and Discovery Summit has helped highlight how important it is to continue along that path.
Meg: Everyone is at a different place on the analytics maturity journey. You may be farther along, and that doesn’t mean there are no challenges – they’re just different challenges.
Michelle: I’m gathering as well that some companies have statisticians or data analysts who are there to consult and help build experiments and analyze results. That's not in any way what we do. Everybody at TI is responsible for their own experimental design and analysis. We all have to have a baseline level of understanding in order to do our day-to-day.
That’s not to say that we don’t have experts – we definitely do. We do have statistics experts who are happy to consult and help if you have questions, but they're not designated to that role. They're a process engineer or manager who also has a lot of experience in statistics.
Meg: That structure lets you raise the statistical capability of the whole organization.
Michelle: There's so much support from the management side of TI for analytics at our site. And I hope I can contribute to sustaining that analytics culture – coming to Discovery Summit and then taking what I learned back to the fab – to support my colleagues and give them the confidence to try something new. We just need to motivate people to go deeper into JMP, and I think that will pay off in a lot of ways.
Meg: Can you give me an example of where you feel there is an opportunity to get more value out of JMP?
Michelle: Automation is a big thing. I haven't played around much with Workflow Builder yet, but I see a lot of opportunity there for those who want to automate their tasks but are not comfortable with scripting. We all have repetitive analyses and some people just aren't aware of how easy it could be to automate those tasks, starting with Workflow Builder.
Once you automate one project, you start to see everywhere else you could use automation, and that is really exciting. I experienced that feeling when I was automating the model validation I'm working on now. As soon as I started, I was like, “Oh my gosh, I can't wait to do this for all these other projects I've been working on!”
Meg: Where does your passion come from? What inspires you to approach your work with so much energy?
Michelle: Day-to-day, I really love coding. But outside of the day-to-day, I just feel really passionate about the factory. I grew up half an hour south of [the TI fab] and every time I’d drive by on the highway, I'd see the sign. When I was in middle school, I toured the factory as part of a summer camp and was like, “I want to work here.” And now I do!
When you're in a process engineering role, it can be hard to see the bigger picture because you're so focused on what's happening right now; “I have to get this lot moving,” “I have to get this qual to pass,” or something like that. But moving into industrial engineering, I got that spark back; “I love this factory.”
To feel like I have a role in the factory in the way that I do is incredibly exciting. I love seeing the bigger picture of the factory’s success and feeling like I'm part of a bigger community.
Meg: I get it! I feel the same way about JMP and the community we have here of people who are passionate about using data to solve problems.
Michelle: I was just talking with my coworker last week, asking: “Do you enjoy doing puzzles because you like the end result? Or do you like doing puzzles because you like putting the pieces together?” And his response was, “Oh, I like to see the end result.”
I got to thinking: For me, it’s not necessarily the end result. I like the searching and the reward that you get when you find that little piece, and that feels a lot like coding to me. You have an idea of something you want to do. You start with just a couple of lines. Test it, and maybe it doesn't work. Maybe you have to revise something and then it works. And then that's just one step in the process.
I get joy out of every little piece of discovery and success along the way. The process of creating is just as exciting as the creation you make in the end. I’m lucky to be involved both in the process and in sharing the result and seeing what it does.