There’s nothing more frustrating for a developer than constantly rebuilding things from scratch. One of the core principles of object-oriented design is reusability: create an object once for each job so you never have to repeat yourself.
Despite this core principle, when it comes to mocking, developers often find themselves repeating the same process over and over again.
But why? When developers write application code, they often communicate with the same external APIs, making the same calls to the same services in different ways. The problem with traditional mocks is that they are written at the code level and are designed specifically around the functionality under development. As a result, a new mock must be created each time that functionality needs to be exercised.
When using traditional mocking frameworks, it is difficult to share the mocks that have been created: not only may you not know where they live in the code base, it is also hard to tell which requirement a particular mock is tied to. What ends up happening is that individual team members create the same mock as the person sitting next to them. It’s a waste of developer time and energy.
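The coupling problem is easy to see in a minimal sketch. The function, endpoint, and data below are invented for illustration; the point is that the canned response is hard-wired into one test, invisible and unusable to the rest of the team.

```python
# A hypothetical client function; names and payloads are invented for illustration.
def get_identity(account_number, http_get):
    """Fetch identity details for an account from an external service."""
    return http_get(f"/identity/{account_number}")

# A traditional code-level mock: the fake response lives inside this one
# test, so a teammate who needs the same service must write their own.
def test_get_identity():
    fake_http_get = lambda url: {"account": "12345", "name": "Pat Smith"}
    response = get_identity("12345", fake_http_get)
    assert response["name"] == "Pat Smith"

test_get_identity()
```

Nothing about `fake_http_get` advertises which API it stands in for, which is exactly why duplicates pile up.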
Where is my mock?
Once a developer has created a mock, collaboration becomes challenging: there is no magic dashboard where you can announce the mocks you’ve created to keep the rest of the team informed.
I recently had a client, a healthcare organization, that used mocking as a common development practice. They had a service provider that was always offline, making it a frequent target for mocking, so each developer built a mock interface for it in their own code base. The mocks were all slightly different, but they served the same purpose. When I spoke to these developers, I found about 20 copies of the same mock. It even surprised them. When asked about the duplication, their response was composed, and not entirely unexpected: “We’re too busy to communicate.”
Sound familiar? (I wish I had an actual statistic here to make you feel better.)
But mocking is necessary, as any developer or tester will tell you, because you need to be able to decouple yourself from the rest of the world as you develop. Mocking is a way to surround your application with a defensible environment, but this approach has its inherent challenges, including:
- Rebuilding each mock from scratch is tedious and a waste of time
- Discovering existing mocks is difficult
- Mocks are not tied to a specific API or requirement, so they cannot be reused
- We need to collaborate, but we’re too busy to communicate
Enter: service virtualization. With this testing practice, you can simplify the mocking process and build a library of reusable virtual services that share core functionality, so you can stop creating the same virtual services over and over again.
Using Service Virtualization
Let’s look at an example. Suppose there is an existing service that accepts an incoming account number, looks up that person’s identity information, and returns a response, and a new virtual service needs to be developed that returns financial details based on the account number.
With service virtualization, much of an existing service’s content can be leveraged when creating a new virtual service; the only things that separate the two services are schema and data. And as organizations build more virtual services, their library of reusable artifacts grows larger. This solves the first challenge: having to create the same virtual service repeatedly.
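Here is a rough sketch of the idea, with invented account numbers and payloads: one response-handling template serves both the existing identity service and the new financial one, and only the canned schema and data differ.

```python
# Sketch of reusing one virtual-service template with different data.
# Account numbers and payloads are invented for illustration.

def make_virtual_service(responses):
    """Return a stub handler that maps account numbers to canned responses."""
    def handle(account_number):
        return responses.get(account_number, {"error": "unknown account"})
    return handle

# Existing virtual service: identity details keyed by account number.
identity_service = make_virtual_service({
    "12345": {"name": "Pat Smith", "dob": "1980-01-01"},
})

# New virtual service: same template, different schema and data.
financial_service = make_virtual_service({
    "12345": {"balance": 2500.00, "currency": "USD"},
})

print(identity_service("12345"))
print(financial_service("12345"))
```

In a real service virtualization tool the template would cover transport, routing, and error handling too, so the share of reusable content is even larger than this toy suggests.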
Sharing a Virtual Service
Unlike mocks, virtual services are highly shareable, and their internal modules can be reused. Virtual services (.pva files) are stored as XML and can easily be checked into source control. If a service simulates a particular function of a particular API, you can search for the artifact in source control, or more easily on a shared virtualization server. As the use of service virtualization grows, teams can leverage the server’s sharing capabilities to connect their desktops directly to it, search for the artifacts they need, pull them down to their desktops, and start using them immediately. This solves the problem of finding virtual services that have already been created and accessing them right away.
Bundling Virtual Services
Parasoft Virtualize also provides a marketplace for private and public artifacts built from common virtualization use cases. This lets you get started quickly and build an internal knowledge base across the organization, simplifying the creation of future virtual services. When you start leveraging a virtual service, you can easily associate it with the API it simulates through its naming convention, descriptions, or tags.
Your development partners can then search directly in a web browser for any virtual assets created for the API they want to simulate, see exactly what has been created, and deploy those assets to their desktops immediately.
This solves the challenge of tying virtual services to their specific APIs and requirements.
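To make the discovery step concrete, here is a toy registry search. The artifact names, API labels, and tags are all invented, and a real Parasoft deployment exposes this through its server and browser UI rather than a Python list; the sketch only shows why API and tag metadata makes artifacts findable.

```python
# A toy in-memory artifact registry; names, APIs, and tags are invented.
artifacts = [
    {"name": "identity-service.pva", "api": "identity-api",
     "tags": ["identity", "accounts"]},
    {"name": "financial-service.pva", "api": "financial-api",
     "tags": ["finance", "accounts"]},
]

def find_artifacts(registry, api=None, tag=None):
    """Filter artifacts by API name and/or tag; None means 'any'."""
    return [a for a in registry
            if (api is None or a["api"] == api)
            and (tag is None or tag in a["tags"])]

print([a["name"] for a in find_artifacts(artifacts, tag="accounts")])
print([a["name"] for a in find_artifacts(artifacts, api="financial-api")])
```

Because each artifact carries the API it simulates, a teammate can query by what they need rather than by where someone happened to save a file.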
Collaborating with Virtual Services
Finally, with all of the above in place, your team can establish a sustainable workflow that gives developers and testers a choice when they realize they need to mock something. Instead of spending time going back and forth, they can query the Parasoft ecosystem for mocks that fit their specific needs; if one exists, they can grab it immediately, and if not, they can create a virtual service that the team can reuse and that anyone who needs it in the future can find. This solves the problem of collaboration.
So what to do now?
To start collaborating with your virtual infrastructure, you can take the first step by starting with a download. Assets can be checked into source control, promoted to a shared team server, and uploaded to your team’s private marketplace. Happy virtualizing!