Testing should be treated as more than just a checkbox activity. It plays a critical role in the success of an ERP implementation, fosters greater user adoption, and supports the onboarding of new team members. In this episode of RPI Tech Connect, Principal Technical Consultant Jeremy Stoltzfus returns to the show to discuss the often-overlooked discipline of test management in ERP projects.
From building test scripts that reflect real business processes to ensuring training and readiness post–Go Live, Jeremy explores how RPI’s structured approach supports long-term success. He also introduces Dokimi, RPI’s proprietary test management solution—purpose-built to centralize and streamline everything from test script development and sequencing to assignment, tracking, and issue resolution.
Whether you’re planning an ERP cloud migration or simply need a better way to manage ongoing updates, this conversation reveals how smart test management strategies—and tools like Dokimi—are transforming how organizations plan, execute, and evolve their ERP environments.
Interested in listening to this episode on another streaming platform? Check out our directories or watch the YouTube video below.
Meet Today’s Guest, Jeremy Stoltzfus
Jeremy Stoltzfus is a Principal Technical Consultant who has been with RPI since January of 2017. Jeremy is a Certified Infor Developer with technology expertise on both Windows/MS-SQL and Unix/Oracle on-premise and Cloud platforms.
Jeremy’s focus on process improvement within multiple Infor CloudSuite configurations, security, development, upgrade, and data conversion projects has helped Infor clients make the most of their ERP investment. He ensures that a client’s business requirements can be achieved using Infor CloudSuite solutions.
Throughout his career with RPI, Jeremy has worked as a technical consultant, integration architect, systems analyst, programmer, and technical lead for various implementation and optimization projects. Jeremy is also very involved with RPI’s training initiatives and enjoys teaching multi-day workshops and presenting webinars and at Infor user groups.
Prior to joining RPI, Jeremy worked as an Infor Lawson system administrator for 15 years where he led significant upgrade and development efforts. He has always been an active participant in the Infor Lawson community.
Meet Your Host, Chris Arey
Chris Arey is a B2B marketing professional with nearly a decade of experience working in content creation, copywriting, SEO, website architecture, corporate branding, and social media. Beginning his career as an analyst before making a lateral move into marketing, he combines analytical thinking with creative flair—two fundamental qualities required in marketing.
With a Bachelor’s degree in English and certifications from the Digital Marketing Institute and HubSpot, Chris has spearheaded impactful content marketing initiatives, participated in corporate re-branding efforts, and collaborated with celebrity influencers. He has also worked with award-winning PR professionals to create unique, compelling campaigns that drove brand recognition and revenue growth for his previous employers.
Chris’ versatility is highlighted by his experience working across different industries, including HR, Tech, SaaS, and Consulting.
About RPI Tech Connect
RPI Tech Connect is the go-to podcast for catching up on the dynamic world of Enterprise Resource Planning (ERP). Join us as we discuss the future of ERPs, covering everything from best practices and organizational change to seamless cloud migration and optimizing applications. Plus, we’ll share predictions and insights of what to expect in the future world of ERPs.
RPI Tech Connect delivers relevant, valuable information in a digestible format. Through candid, genuine conversations and stories from the world of consulting, we aim to provide actionable steps to help you elevate your organization’s ERP. Whether you’re a seasoned professional or new to the ERP scene, our podcast ensures you’re well-equipped for success.
Tune in as we explore tips and tricks in the field of ERP consulting each week and subscribe below.
Transcript:
Chris Arey
Testing can make or break an ERP project, and most teams don’t realize how critical it is until it’s too late. In this episode of RPI Tech Connect, we’re talking tools, tactics, and modern solutions. Let’s get into it.
Welcome to another episode of RPI Tech Connect. I’m your host, Chris Arey, and today we’re tackling a topic that is foundational to ERP success, and that is testing. Whether you’re rolling out a new system or planning for long-term support, testing can make the difference between a smooth launch and weeks of triage.
Joining me is Jeremy Stoltzfus, Principal Technical Consultant here at RPI. Jeremy, welcome back sir. It’s been a minute. How are you?
Jeremy Stoltzfus
Well, thanks for having me again. For those who don’t know me, I’ve been with RPI for a little over eight years now. Prior to joining RPI, I worked at two customer sites as a Lawson system administrator, just kind of making my way up, starting with Lawson v7. I’ve been in the Infor Lawson space for a little over 20 years now. Since joining RPI, I’ve been focused on CloudSuite implementations for the past seven years.
Chris Arey
Very good. Thank you for spending this afternoon with me. I’m looking forward to talking about testing. I think it’s safe to say that everyone knows that it plays a big part of an ERP implementation, whether that is a full migration or a single application. From your perspective, what makes it so critical and why does it sometimes get overlooked?
Jeremy Stoltzfus
Well, there are a couple of different purposes when it comes to testing. First, we want to validate that the system works against how the customers run their businesses. So, if the out-of-the-box system doesn’t quite meet their needs, that’s usually identified in some of the early rounds of testing.
Second, testing is also a training opportunity, which is especially true when we’re moving to Infor CloudSuite applications. When a customer is migrating, the applications look and behave very differently than they did in Version 10.
And that’s especially true if you’re coming from another, non-Infor system into Infor. So testing also becomes an opportunity for the customers to practice executing the transactions that they’re used to executing in their previous system.
Chris Arey
Okay. The first idea you mentioned, making sure that the system is operating in a way that supports their business requirements. Is that making the distinction between technical and functional? Is that what you’re referencing?
Jeremy Stoltzfus
Yeah, it’s not just a technical function. It’s really more like: I’m in accounts payable and I need to enter and pay my invoices. And I do this because my business has certain special conditions, right?
As a result we kind of have to look at the system and we execute these test scripts to make sure that, hey, here’s how the system functions out of the box, and maybe it doesn’t initially capture all of the special needs that the client may have.
Chris Arey
Okay, and testing is an opportunity to make sure that it is working in that way, correct?
Jeremy Stoltzfus
Exactly. Yes.
Chris Arey
Okay. And then training, of course, you’ve been on the pod before to talk about the value of training. Testing is a great opportunity for training as well, is that right?
Jeremy Stoltzfus
Yeah, testing is just practice or rehearsal, right? You’re running through these scripts again and again, and repetition is the best way to learn something. Running through multiple testing rounds, running the same test script multiple times with different data scenarios, just helps ingrain the steps to take and builds familiarity with the system.
Chris Arey
Got it. Okay, so I’m hearing all these things, and I feel like I might know the answer to my next question, but you’ve worked on a lot of different projects. What separates a solid testing strategy from one that’s just kind of checking the box?
Jeremy Stoltzfus
Right, so a foundation of solid test scripts is vital, because you want to make sure you’re capturing all aspects of the system, not just the things you think about, like what you’ve done this past week. What about the things that only happen once a quarter, once a month, once a year? And even the little one-offs: okay, this time we’re adding a new type of data element that we haven’t captured before.
So that foundational library of test scripts covering all aspects of the system really becomes critical, right? It’s not just running each script and moving on, it’s also having multiple runs of that same test script. Everyone’s brain works a little bit differently, right? I read a set of instructions and interpret them one way, but you may read that same set of instructions and your mind goes somewhere else, or looks at a different part of the application.
Even though we’re reading the same instructions, you may interpret them slightly differently. Test scripts are hard to develop because you want to be as specific as possible, but not overwhelm your tester with hundreds of very detailed steps to follow.
Chris Arey
Do you have multiple versions of scripts so that like different people can run through them?
Jeremy Stoltzfus
Yeah, it’s really the same script that you want multiple people to test because one person is going to give different feedback than somebody else. Or maybe they are trying different scenarios in their own minds. They know the types of things that they are going to run into or that they need to test.
Chris Arey
So a good testing strategy then is accounting for all of the nuance that different people might experience when they’re going through an exercise, as well as thinking about those things that you do on a day to day, as well as once a quarter and so forth.
Jeremy Stoltzfus
Right.
Chris Arey
And we’re talking about testing in this context of ERP implementation, you know, installing your big system. But I feel like testing can also be a factor post-Go Live too, right?
Jeremy Stoltzfus
Absolutely. Good test scripts can be reused over and over to onboard new hires, train new staff, and even run regression testing after upgrading. A lot of times when we do implementations, those test scripts get turned into job aids or even user manuals so that the customer has that documented process of how to do certain things within the system.
Chris Arey
So a new hire joins the company. They’re supposed to do X, Y, and Z job duties. These scripts are in place for them to go through those activities and get familiar with the system. And it probably saves a lot of time, right?
Jeremy Stoltzfus
Yeah, absolutely. The repeatability and the clarity of direction on how you as a business are running that software just makes it a much more documented process.
Chris Arey
Okay, so in your more than 25 years in this field, I’m sure you’ve been exposed to a number of testing tools, methods, and whatnot. What makes a testing solution good in your eyes?
Jeremy Stoltzfus
Right. So that is a great question, because maintaining a library of test scripts is so much more than just having a folder full of documents labeled with certain names. You have to be able to quickly grab a handful of scripts based on a certain section, module, or function within the application and be able to assign them out.
You need to be able to easily update and maintain the scripts. As the application changes, how do my scripts need to change? For example, navigation changes. Cloud-based software changes all the time. And your path to get to a certain form or screen within the application may be different. So how do you quickly update your test scripts to account for those types of changes?
Chris Arey
Yeah. So Infor CloudSuite, for example, has the biannual cumulative updates, and those introduce all sorts of new features and functionality changes to their various applications. So when that happens, theoretically, if you have a bunch of test scripts, you would need to update them, correct?
Jeremy Stoltzfus
Right. Right.
Chris Arey
And is that time consuming? I feel like it probably is.
Jeremy Stoltzfus
It can be time consuming for sure, depending on the depth and the complexity of the changes. But knowing how to be able to quickly identify what needs to be changed and categorizing them becomes an important part of the process. We have a library of about 1,800 test scripts for Infor CloudSuite.
It’s one thing to say, well, I’ve got 1,800 test scripts, but okay, maybe you’re in contract management, and there are only 30 or so of those test scripts that you’re responsible for. So being able to identify a group and categorize those test scripts becomes very important.
And being able to, like I said, assign some sort of ownership to those different test scripts. You know, maybe Infor releases a new feature, or maybe it’s an enhancement, or maybe it’s a bug fix to a certain set of functionality, right?
Maybe in reading the release notes, it’s clear that it’s going to be an accounts payable function. Okay, let me pull up my accounts payable scripts. So as long as those are categorized and grouped, you can quickly identify what actually needs to be tested each time.
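The filtering Jeremy describes can be sketched in a few lines. This is a hedged illustration with hypothetical script IDs and field names, not RPI’s actual catalog schema: each script carries a module tag, so the subset affected by a release note can be pulled quickly.

```python
# Hypothetical test-script catalog: each script is tagged with the functional
# module it belongs to, so the affected subset can be pulled when release
# notes mention, say, accounts payable.
scripts = [
    {"id": "AP-001", "module": "accounts_payable", "name": "Enter invoice"},
    {"id": "AP-002", "module": "accounts_payable", "name": "Pay invoice"},
    {"id": "CM-001", "module": "contract_management", "name": "Create contract"},
]

def scripts_for(module):
    """Return only the scripts owned by one functional area."""
    return [s for s in scripts if s["module"] == module]

# A cumulative update touches accounts payable: grab just those scripts.
affected = scripts_for("accounts_payable")
print([s["id"] for s in affected])  # → ['AP-001', 'AP-002']
```

The same lookup works for assigning ownership: a contract-management tester only ever sees the 30 or so scripts tagged to their module.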
Chris Arey
Okay, so it sounds like both you and RPI as a whole have this categorization approach for maintaining scripts. Is that part of why our testing solution is a good one?
Jeremy Stoltzfus
Correct. Yeah, being able to identify and make that grouping. You know, there are a lot of dependencies in testing. It is important to be able to understand what’s dependent on one another and what scripts need to be executed in what order. All of those things really come into play.
Chris Arey
Okay. Do testing scripts sometimes relate to one another? Like if one script does this and there’s a change there, is it going to have a direct effect on another script that’s related to it?
Jeremy Stoltzfus
Yeah, absolutely it can. There’s downstream impact. If the end result doesn’t yield the same thing, then the next script in the sequence doesn’t get what it’s expecting as input.
Chris Arey
So that sounds like a lot of different moving parts. How are you keeping track of that?
Jeremy Stoltzfus
For an implementation project, we break testing down into three main testing phases, right? We’ve got unit tests, system integration tests, and user acceptance testing. And then, like I said, even after Go Live, it’s more about just identifying those scripts and, okay, I’ve got a new employee starting in finance, for example, right? So here’s the set of finance scripts that need to be executed for that person.
It’s really about knowing your key business processes and what’s important to you. We look at the different groupings, we look at the different test phases and post-Go Live, we hope that customers are looking at the release notes and being able to identify specific scripts. And we hope that we give them those tools to identify quickly the scripts that need to be updated.
You know, we talked about identifying how scripts tie together. You know, for example, a test script to submit a purchase requisition is a prerequisite for a test script to create a purchase order. So being able to identify and sequence those things in the correct order, and not only just sequencing them, but being able to track data elements along with it.
I put in a requisition, the requisition gets created, and it’s number one, two, three, four, five. I want to tell somebody in purchasing, now go create the PO. Okay, well, what requisition am I using? So for me as the person testing the requisition, I’ll enter a value to say it’s requisition one, two, three, four, five.
The next script down the line gets fed that information, right? So that’s the things that we try to capture as well while stringing together business processes.
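The requisition-to-PO handoff Jeremy describes is essentially a pipeline where each script’s recorded output feeds the next script’s input. A minimal sketch, with entirely hypothetical function names and a hard-coded requisition number standing in for the value a tester would record:

```python
# Hypothetical sketch of chained test scripts: the requisition number captured
# by one script is passed down as input to the next script in the sequence.
def submit_requisition(ctx):
    # The tester records the requisition number the system generated.
    ctx["req_number"] = "12345"
    return ctx

def create_purchase_order(ctx):
    # This script is a downstream dependency of submit_requisition.
    assert "req_number" in ctx, "prerequisite script has not run"
    ctx["po_number"] = f"PO-for-{ctx['req_number']}"
    return ctx

# Sequencing matters: running these out of order would trip the assertion.
sequence = [submit_requisition, create_purchase_order]
context = {}
for step in sequence:
    context = step(context)
print(context["po_number"])  # → PO-for-12345
```

The shared `context` dictionary plays the role of the tracked data elements: purchasing does not have to ask which requisition to use, because the value flows with the sequence.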
Chris Arey
It just dawned on me here that like when you’re going through these testing scripts, you’re really testing two different things. One, that the user can complete this exercise and two, that the system is doing what it’s supposed to do, right? And those are happening simultaneously.
Jeremy Stoltzfus
Yeah, absolutely. Not only are we getting the users comfortable with the system by having them execute these test scripts, but we want to focus on pass or fail. We want to be able to track the status. We want to be able to make sure that all our scenarios have been completed. So those metrics are another key feature and an important thing to track with your testing.
Like I mentioned, we have a library of about 1,800 test scripts, right? Not every customer is going to use all 1,800 of those, because that’s across the full suite. I mentioned creating a requisition as a basic testing procedure. Well, there are multiple different types of requisitions that need to be created, so we may need to track 10 versions, or 10 instances, of that same script, but with different data scenarios.
And so being able to track the metrics and say, okay, well, this test script passed eight times, but why did it fail those two times? You know, it’s the exact same script, but the data elements may be different and cause different things downstream. And so being able to identify and call out those things is very important.
Chris Arey
Ooh, yeah. I love this scenario you’re talking about here, like eight out of 10 times. What happened the two out of the 10 times? And you really have to dig deep. It sounds like when you’re doing this type of testing, you have to figure out what broke down. So what is the approach for fixing something like that? I guess it depends on the specific situation, but do you have a process?
Jeremy Stoltzfus
Yeah, so obviously you’re tracking issues, right? Issue logs. But if your issue log can be directly linked to that specific instance of that test script, you can then go back and review, okay, well, I can see that this was a requisition that was put in for this item type. And that’s different than the other eight that passed, so let me look specifically into that specific detail, and that’s going to help me track down the issue that much quicker.
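The link Jeremy draws between an issue and one specific instance of a script can be sketched as a small data structure. This is a hypothetical illustration (the IDs, scenario labels, and field names are invented for the example), showing how a failure traces back to the exact data scenario that ran:

```python
# Hypothetical sketch: each run of a test script records which data scenario
# it used, and any issue is logged directly against that specific instance.
from dataclasses import dataclass, field

@dataclass
class ScriptRun:
    script_id: str
    scenario: str                 # the data variation this instance used
    passed: bool
    issues: list = field(default_factory=list)

# Same script, ten-ish instances with different data; two shown here.
runs = [
    ScriptRun("REQ-001", "item_type=stock", passed=True),
    ScriptRun("REQ-001", "item_type=special", passed=False),
]

# Attach an issue to the failing instance, not just to the script in general.
for run in runs:
    if not run.passed:
        run.issues.append(f"Failed for scenario {run.scenario}")

failed = [r for r in runs if not r.passed]
print(failed[0].issues)  # → ['Failed for scenario item_type=special']
```

Because the issue carries the scenario, the reviewer can go straight to the detail that differed from the eight passing runs instead of re-reading the whole script.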
Chris Arey
Okay, got it, I like that. So you also talked about content libraries. I’m sure that’s a big value add for organizations who are going through testing cycles. How do those work?
Jeremy Stoltzfus
So when we start an implementation project, we start with the base scripts, right? We have the out of the box scripts that are generic, but robust enough to test all the different pieces, so we provide those as a starting point for customers.
Chris Arey
Okay. And ownership over those scripts. Is that something that’s shared between us, the implementation partner and the organization? Or what does that process look like?
Jeremy Stoltzfus
So when a client is learning a brand new system, we at RPI, as the implementation partner, will take on that responsibility of ownership initially. But as users become more comfortable with the system, we like to hand that over and really give them the power, right? Part of our project methodology is that transition of ownership as they become more familiar with the system.
Chris Arey
Okay. It’s like training wheels.
Jeremy Stoltzfus
Yeah, exactly. And you know, that just rolls into regression testing post-Go Live, right? Those scripts end up becoming theirs to own and maintain as job aids, as they adjust their processes. Like we said, software changes, right? They really must own those and keep them up to date. And that’s not always an easy task, but it is important.
Chris Arey
Okay. And we tell them that it’s like, look, like you’re live now, and you’ve gotten familiar with these scripts. You know Infor releases these changes twice a year, so you need to stay on top of the scripts or they’re going to break.
Jeremy Stoltzfus
Yeah, maybe not break so much as like, okay, well, these first three steps telling me how to get to the screen are wrong. If I can figure out how to get to the screen, usually the rest of the steps are okay.
Sometimes it’s the navigation, or sometimes there are new features or new fields that are added that weren’t required before, and now they are. There are always some nuances, so break is just maybe a little too strong of a word.
Chris Arey
Okay, noted for the future, thank you. So we’ve been talking about content libraries and metrics to see whether scripts are passing or failing and creating all these different scenarios. What kinds of tools have you been using to track these testing efforts?
Jeremy Stoltzfus
Right. I’ve seen the gamut since joining RPI. I think when I first started here, we were tracking most of it in an Excel document. It was usually the project manager that was responsible for keeping that Excel document up to date. That approach is great because it can be specific.
One person knows exactly how the data needs to be put into the Excel for all the formulas, the charts, the graphs, the metrics, and everything to come out the way that it needs to for reporting purposes.
Leadership always wants to know and understand where we are with our testing efforts, whether people are doing their jobs, and things of that nature. It’s interesting. While using that method, we would often print physical copies of the scripts. I remember there was a project where one of our consultants was up until two in the morning the night before, printing out scripts, packing them in a suitcase, jumping on an airplane, and flying to the client site. Literally a suitcase full of printed-out test scripts.
Chris Arey
That’s probably pretty heavy, huh? Heavy suitcase.
Jeremy Stoltzfus
Yeah, so that’s not an easy thing. And then yeah, it’s just a huge burden to maintain, and somebody’s filing, looking at the scripts as they’re being completed, then they’re updating it, right? It’s just very inefficient.
With the transition to more remote testing than we used to do, we’ve had to adopt more online tools that everybody has access to, whether that’s a Smartsheet, SharePoint, or another tool. Dashboards can be built off of that data, and it gives the end user a little bit more responsibility: okay, I need to go into the Smartsheet and mark that script as passed, make notes, or mark things as failed.
We can still kind of build those, but even that is troublesome at times, though Smartsheet’s a great tool. It’s kind of like an online spreadsheet, but it’s a little bit more than that. There’s a lot of project management capabilities in there, graphing and charting type of thing. But when you give somebody access to a Smartsheet because they need to update it, they can delete lines, they can resort the sheet, they can do all kinds of things.
I’ve seen everything with that tool, and it is a little bit of a headache to deal with sometimes. So that’s another challenge. You can capture things in a little bit more automated way from the end user, but it still has a different type of burden, I think, that’s associated with it.
We’ve tried a couple of packaged solutions out of the box, but a lot of them seem to be focused more on tracking completion status and other metrics, and not so much on the script content. In those tools, the script itself may just be a Word document that’s been loaded in. But now I’ve got to go in and update all these Word documents because the navigation changed, right?
Chris Arey
Hmm. Yeah.
Jeremy Stoltzfus
So, you know, we haven’t found a good packaged solution that kind of does it all. And I don’t know if we’re just doing a unique kind of testing, but in my opinion, the solutions that we’ve looked at just have not been robust and flexible enough to really handle the full workload. They don’t encapsulate all of the features, like I said, such as integrated issue tracking.
Chris Arey
Yeah.
Jeremy Stoltzfus
Okay, now I’m tracking my test statuses over here with my various scripts, but then over here, I have a completely separate issues list where I’m tracking things. And so that’s been a little bit challenging.
Chris Arey
A lot of different things being tracked in different places.
Jeremy Stoltzfus
Yeah, exactly. Exactly.
Chris Arey
It’s probably tough, but I’m sure it’s better than packing a suitcase full of physical documents while traveling.
Jeremy Stoltzfus
It is better than packing a suitcase for sure.
Chris Arey
But it seems like almost a stopgap for a solution that’s maybe more holistic and in one place.
Jeremy Stoltzfus
Right, so one of my primary roles on a project is to identify gaps and come up with solutions, right? So I go to a customer who says, okay, this is my business process. The software works this way. How do we bridge the gap?
I’ve been in the IT field for about 30 years now, so by nature, I’m a problem solver. You put me and a couple of like-minded people together, and we basically decided to write our own solution.
Chris Arey
Ooh!
Jeremy Stoltzfus
Yeah, so we asked the question, like, how can we better manage not only the content, but the assignment, the grouping, the scheduling, the metrics, and the issue management of scripts all in one place?
So we’ve taken years of knowledge from multiple consultants and fed that into a system that is one complete package. It’s also very easily deployed and maintained for our projects and clients. We took all the good things from the various tools that we talked about and addressed all the pain points from those various tools and rolled them into one package that we call Dokimi.
Chris Arey
Ooh, Dokimi, I like that name. I don’t know what it is. What’s the story there? Why did you pick that name?
Jeremy Stoltzfus
So we were kind of brainstorming a name to call this, rather than just “our test tool.” And on a whim I said, hey, what’s the Greek word for test? And dokimi was the result. It means try, test, trial, proof, or rehearsal. And when I think about all of those words, they all apply, right?
Obviously, we try and test. In the early test phases of a project, we’re really just testing out the system. We’re trying it to see how it works. Then as the project progresses, we build solutions, and we do a trial run of the things we just built. We’re proving out that the system can do what the client needs it to do. And then we also discussed regression testing and ongoing training.
Chris Arey
Yeah.
Jeremy Stoltzfus
We’re rehearsing those scripts over and over again to ensure system stability.
Chris Arey
I love that you’ve shared the evolution of testing both in your experience and here at RPI and how through that process, through all these different things that you’ve done, you’ve arrived at this tool. The word dokimi meaning trial, test, proof, rehearsal, all of those things together, it’s very cool, intentional, and direct.
I love the name because it’s so clear. Knowing the definition, I have no question, no doubt in my mind what it is that this tool does. So very, very cool. Thank you for sharing that. And this is something you’re using right now, yeah?
Jeremy Stoltzfus
Yes, we’ve been using the tool for a little over a year now, actually. We’ve been using it with multiple clients. We’ve gone through a couple of the early stages and gotten really good feedback from our customers. We’ve made some design adjustments, tweaks, and modifications based on the feedback we’ve received. We think we have a really good solution.
Chris Arey
As I said at the outset here, you were trying to solve a gap, and you had all these different tools you were using to manage testing. Do you feel that you’ve been successful?
Jeremy Stoltzfus
Yeah, I think we’ve bridged all the gaps that we’ve had with all of our various other tools.
Chris Arey
It’s very exciting that we as consultants are also developing our own in-house software based on the problems that we’re seeing and experiencing all the time. So very inspirational, Jeremy. And you’re the mastermind behind this.
Jeremy Stoltzfus
It’s something that’s been kind of in my mind for the past couple of years. I’d been toying around with a few different ideas, and then a couple of coworkers kind of sparked the progression of it over the past two years or so.
Chris Arey
Very cool. I can’t wait to see it. We talked about how testing has evolved over time and when it’s beneficial to learn in-person versus remotely, based on what the scenario calls for. It really is fantastic to hear that Dokimi is providing a holistic solution to testing, no matter if it’s for a full cloud migration or an application on its own.
Love to hear that. Thank you for sharing that with me. We are getting close to time this afternoon, but as you know, before we wrap up, I always like to ask my guests if you could share one takeaway for today’s audience regarding testing and Dokimi, what would it be? I’d love to hear what you could share.
Jeremy Stoltzfus
Sure. Testing strategies should actually be a strategy. It’s not just an afterthought of, let’s just test a few things in this system. You need to identify specific scenarios and be able to repeat that process easily.
Chris Arey
Testing now in this environment is gonna pay off down the road, right? That’s another thing. I don’t think we talked about this too much, but when testing in a dev environment, you get to go through trial and error right there and experience if something’s not working before it really matters. So it’s important to take advantage of that.
Jeremy Stoltzfus
Absolutely.
Chris Arey
Okay. Well, thank you again for joining me and sharing these exciting developments with Dokimi and RPI’s testing solutions. For those of you listening in, if you want to find out more about our solution or how it can help your organization, we would love to chat with you.
Or, if you have questions about testing in general, we would also love to hear from you. You can contact us by emailing podcast@rpic.com. Again, that’s podcast@rpic.com. Would love to have a conversation with you about testing.
Jeremy, thank you again. As always, it’s always nice chatting with you. This has been RPI Tech Connect. I’m your host, Chris Arey, and I’m going to sign off. Take care.
Jeremy Stoltzfus
Thanks everyone.
