After a period of intermittent work and testing, update (I) of the DCS_FunTester distributed stress testing framework has been completed, and it brings several changes.
Added support for scheme 3
In the Distributed Performance Testing Framework Use Case Scenarios (iii) and Docker-based Distributed Performance Testing Framework Function Verification (III), I mentioned scheme 3: Groovy script-based test cases. This update adds support for executing Groovy test cases. For now there is no filtering of script content beyond access validation.
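For reference, a scheme 3 test case might look something like the sketch below. This is only a hypothetical example built on plain JDK classes; the URL and request count are made up, and real test cases would normally use the framework's own utilities.
// Hypothetical Groovy test case submitted under scheme 3 (URL and count are made up)
def url = new URL("http://localhost:12345/funtester/demo")
def times = 100

def start = System.currentTimeMillis()
times.times {
    def connection = url.openConnection()
    connection.connect()                 // fire the request
    connection.inputStream.close()       // read and discard the response
}
println "total cost: ${System.currentTimeMillis() - start} ms"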
Here’s how to implement the master node:
@Override
int runScript(GroovyScript script) {
    def mark = SourceCode.getMark()          // task mark shared by all nodes
    def num = script.getMark()               // on the master, mark carries the node count
    def hosts = NodeData.getRunHost(num)     // pick idle nodes and mark them busy
    try {
        hosts.each {
            script.setMark(mark)             // overwrite mark with the task mark before forwarding
            def re = MasterManager.runRequest(it, script)
            if (!re) FailException.fail()
            NodeData.addTask(it, mark)       // record which task the node is running
        }
    } catch (FailException e) {
        hosts.each { f -> MasterManager.stop(f) }   // stop any nodes that already started
        FailException.fail("Multiple node execution failed!")
    }
    mark
}
Here is how to implement the slave node:
@Override
public void runScript(GroovyScript script) {
    // simply hand the script text to the Groovy executor
    ExecuteGroovy.executeScript(script.getScript());
}
No parameters are passed in yet, but a params attribute is reserved and can later be used for script parameter configuration, as sketched below.
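I have not shown ExecuteGroovy here, but conceptually it just evaluates the script text. A minimal sketch, assuming a plain GroovyShell under the hood (the params map is hypothetical and only illustrates how parameters could be bound in later):
import groovy.lang.Binding
import groovy.lang.GroovyShell

class SimpleGroovyRunner {

    // Evaluate the script text; params (hypothetical) would become script variables
    static Object executeScript(String scriptText, Map<String, Object> params = [:]) {
        def binding = new Binding(params)        // expose params to the script
        def shell = new GroovyShell(binding)     // fresh shell per call, no shared state
        return shell.evaluate(scriptText)        // run the script and return its result
    }
}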
Adding a registration mechanism
After the master node was added, the slave nodes are no longer accessed directly, so the master needs to know which nodes are available. I wrote a simple registration mechanism myself and put it in a single class.
package com.funtester.master.common.basedata;

import com.funtester.base.bean.PerformanceResultBean;
import com.funtester.base.exception.FailException;
import com.funtester.frame.SourceCode;
import com.funtester.master.common.bean.manager.RunInfoBean;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class NodeData {

    /** Node status */
    public static ConcurrentHashMap<String, Boolean> status = new ConcurrentHashMap<>();

    /** Running information about the node, obtained from progress reports */
    public static ConcurrentHashMap<String, String> runInfos = new ConcurrentHashMap<>();

    /** Node run results */
    public static ConcurrentHashMap<Integer, List<PerformanceResultBean>> results = new ConcurrentHashMap<>();

    /** Node update time */
    public static ConcurrentHashMap<String, Integer> time = new ConcurrentHashMap<>();

    /** ID of the task running on the node */
    public static ConcurrentHashMap<String, Integer> tasks = new ConcurrentHashMap<>();

    /** Register a node and record its status */
    public static void register(String host, boolean s) {
        synchronized (status) {
            status.put(host, s);
            mark(host);
        }
    }

    /** Available nodes */
    public static List<String> available() {
        synchronized (status) {
            List<String> availables = new ArrayList<>();
            status.forEach((k, v) -> {
                if (v) availables.add(k);
            });
            return availables;
        }
    }

    /** Mark the node update time */
    private static void mark(String host) {
        time.put(host, SourceCode.getMark());
    }

    /** Check and delete expired nodes and expired data; meant to be run by a scheduled task */
    public static void check() {
        int timeStamp = SourceCode.getMark();
        List<String> hkeys = new ArrayList<>();
        synchronized (status) {
            time.forEach((k, v) -> {
                if (timeStamp - v > 12) hkeys.add(k);
            });
            hkeys.forEach(f -> status.remove(f));
        }
        synchronized (runInfos) {
            hkeys.forEach(f -> runInfos.remove(f));
        }
        synchronized (tasks) {
            hkeys.forEach(f -> tasks.remove(f));
            tasks.forEach((k, v) -> {
                if (timeStamp - v > 60 * 30) tasks.put(k, 0);
            });
        }
        synchronized (results) {
            List<Integer> tkeys = new ArrayList<>();
            results.forEach((k, v) -> {
                if (timeStamp - k > 3 * 3600) tkeys.add(k);
            });
            tkeys.forEach(f -> results.remove(f));
        }
    }

    /** Add runtime information */
    public static void addRunInfo(RunInfoBean bean) {
        synchronized (runInfos) {
            runInfos.put(bean.getHost(), bean.getRuninfo());
        }
    }

    /** Get the running information of the use case task matching the description */
    public static List<String> getRunInfo(String desc) {
        synchronized (runInfos) {
            ArrayList<String> infos = new ArrayList<>();
            runInfos.forEach((k, v) -> {
                if (v.contains(desc)) infos.add(v);
            });
            return infos;
        }
    }

    /** Add a node run result */
    public static void addResult(int mark, PerformanceResultBean bean) {
        synchronized (results) {
            results.computeIfAbsent(mark, f -> new ArrayList<PerformanceResultBean>());
            results.get(mark).add(bean);
        }
    }

    /** Add the ID of the task running on a node */
    public static void addTask(String host, Integer mark) {
        synchronized (tasks) {
            if (status.get(host) != null && status.get(host) == false) tasks.put(host, mark);
        }
    }

    /** Pick the nodes that will run the task and mark them as busy */
    public static List<String> getRunHost(int num) {
        synchronized (status) {
            List<String> available = available();
            if (num < 1 || num > available.size()) FailException.fail("Not enough nodes to perform the task.");
            List<String> nods = new ArrayList<>();
            for (int i = 0; i < num; i++) {
                String random = SourceCode.random(available);
                status.put(random, false);
                nods.add(random);
            }
            return nods;
        }
    }

}
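To make the flow concrete, here is a quick usage sketch of NodeData on the master side. The host addresses are made up; in practice the slave nodes report themselves through the registration interface.
// Two slave nodes register themselves as idle (addresses are made up)
NodeData.register("192.168.0.11:8080", true)
NodeData.register("192.168.0.12:8080", true)

// The master picks two idle nodes, marks them busy and records the task mark
def hosts = NodeData.getRunHost(2)
def mark = SourceCode.getMark()
hosts.each { NodeData.addTask(it, mark) }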
This is still a bit rough; the plan for the future is to move to Redis or another mature component. Originally I wanted to encapsulate the node information into a single object, but after thinking it over I decided to keep the pieces separate for now.
Direct access to the slave nodes was removed
Since the master node now assigns all tasks and runs the use cases, direct access to the slave nodes has naturally been revoked. A few interfaces are still exposed, although they are not shown in the Swagger documentation; the interfaces for refreshing the master node information and re-registering a node are kept as a fallback for when a slave node fails.
The service layer
The functionality that was previously written entirely as static methods has been extracted into a service interface. The main interface is as follows:
package com.funtester.master.service
import com.funtester.slave.common.bean.run.GroovyScript
import com.funtester.slave.common.bean.run.HttpRequest
import com.funtester.slave.common.bean.run.HttpRequests
import com.funtester.slave.common.bean.run.LocalMethod
import com.funtester.slave.common.bean.run.ManyRequest
interface IRunService {

    public int runRequest(HttpRequest request)

    public int runRequests(HttpRequests request)

    public int runMethod(LocalMethod method)

    public int runScript(GroovyScript script)

}
Each request object has a mark attribute: on the master node it holds the number of nodes that should execute the task, while on the slave node it holds the mark identifying the task to execute, as illustrated below.
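A hypothetical, stripped-down version of such a bean, just to illustrate the two meanings of mark (the real GroovyScript bean carries more fields):
// Illustration only; not the framework's actual class definition
class GroovyScriptDemo implements Serializable {

    String script   // the Groovy source text to execute

    // On the master, mark is the number of nodes that should run the task;
    // the master then overwrites it with the task mark before forwarding,
    // which is the value the slave node sees.
    int mark
}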
Updating synchronization information
Here is the general idea:
- Start the master node.
- Start the slave node; it first requests the master (whose address comes from configuration or an interface setting) and obtains its local IP.
- The slave node then uses a scheduled task to synchronize its status to the master node (a sketch follows below).
I did not use a socket-based interface for this; sockets have always felt like more trouble than they are worth.
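As a rough illustration, the slave-side heartbeat can be as simple as a scheduled HTTP call. Everything here is an assumption for illustration: the register path, the query parameters and the 5-second interval are not the framework's actual values.
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

class HeartbeatDemo {

    // Periodically report this node's address and idle status to the master
    static void start(String masterUrl, String localIp) {
        def executor = Executors.newSingleThreadScheduledExecutor()
        executor.scheduleAtFixedRate({
            try {
                def connection = new URL("${masterUrl}/register?host=${localIp}&status=true").openConnection()
                connection.connect()
                connection.inputStream.close()
            } catch (Exception e) {
                // the master may be temporarily unreachable; just retry on the next tick
                e.printStackTrace()
            }
        } as Runnable, 0, 5, TimeUnit.SECONDS)
    }
}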
Have Fun ~ Tester!
FunTester test framework and distributed test framework DCS_FunTester official account, welcome to follow!
- A preliminary study of the FunTester test framework architecture diagram
- The ultimate showdown between K6, Gatling and FunTester!
- Single-player 120,000 QPS — FunTester revenge
- FunTester’s past and present life
- Automate testing in production environment
- Tips for writing test cases
- 7 skills to Automate tests
- IoT testing
- Why do tests miss bugs
- Selenium Automation Best Practice Tips (1)
- Selenium Automation Best Practice Tips (Middle)
- Selenium Automation Best Practice Tips (Part 2)
- Asynchronous authentication of Socket interfaces
- After Selenium 4, no longer meet the API