Allocation of resources to testing

What is this?

You are a software tester. The software consists of four modules, and the tests are divided into five categories: A, B, C, D, and E.

You have to decide how much of your work to allocate to each test category.

Policy

From past experience, assume that the test-module correlations are given by Table A. When the tests are performed at proportions $p$, the probability of finding a failure in module 1 is taken to be $A_1 \cdot p$, where $A_1$ is module 1's column of Table A.
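As a quick illustration (a minimal sketch with a made-up 2-test, 2-module table, not the Table A generated below): if $p$ is the vector of test proportions, the per-module discovery probabilities are $A^\top p$.

python3

import numpy as np
# Hypothetical 2-test x 2-module version of Table A (made-up numbers)
A = np.array([[0.10, 0.00],
              [0.05, 0.30]])
p = np.array([0.5, 0.5])  # run each test half of the time
print(A.T @ p)            # per-module discovery probability: [0.075 0.15]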

Data creation

Create Table A from random numbers.

python3


import numpy as np, pandas as pd
from pulp import *
from ortoolpy import addvar, addvars
np.random.seed(4)
nm, nt = 4, 5  # number of modules, number of tests
# Table A: failure discovery rate of each test for each module (negative draws clipped to 0)
A = pd.DataFrame(np.maximum(np.random.normal(-0.05, 0.2, (nt, nm)), 0),
                 columns=['module%d' % (i + 1) for i in range(nm)],
                 index=['test%s' % chr(i + 65) for i in range(nt)])
print(A)
        module1   module2   module3   module4
testA  0.000000  0.049990  0.000000  0.088720
testB  0.000000  0.000000  0.000000  0.069715
testC  0.016450  0.000000  0.073734  0.000000
testD  0.035014  0.016451  0.000000  0.020199
testE  0.000000  0.259396  0.094668  0.000000

Calculation part 1

Let's use Python to maximize the total, i.e. the sum of the discovery probabilities over all modules.
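Written out (with $x_i$ the proportion of work on test $i$), the model below is: maximize $\sum_i (\sum_j A_{ij}) x_i$ subject to $\sum_i x_i = 1$ and $x_i \ge 0$. The expression `lpDot(A.sum(1), x)` is exactly the dot product of the row sums of Table A with $x$.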

python3


m = LpProblem(sense=LpMaximize)  # mathematical model
x = addvars(nt)                  # variables: workload proportion of each test
m += lpDot(A.sum(1), x)          # objective: total discovery probability
m += lpSum(x) == 1               # workload proportions sum to 1
m.solve()
r = np.vectorize(value)(x)       # solution as a numpy array
print('%s sum %.4f minimum %.4f' % (LpStatus[m.status],
    r.dot(A.values).sum(), r.dot(A.values).min()))
for i, v in zip(A.index, r):
    print('%s workload %.4f' % (i, v))
>>>
Optimal sum 0.3541 minimum 0.0000
testA workload 0.0000
testB workload 0.0000
testC workload 0.0000
testD workload 0.0000
testE workload 1.0000

The result is that all of the work goes to test E, the most efficient test.
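This can be read off from the row sums of Table A, which are the objective coefficients: test E's total discovery rate is far above the others, so putting all the work on it maximizes the sum.

python3

print(A.sum(1))  # total discovery rate of each test over all modules
# From the table above: testA 0.1387, testB 0.0697, testC 0.0902,
# testD 0.0717, testE 0.3541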

Consideration

However, test E cannot find any failures in module 1 or module 4. Quality assurance should be thought of as guaranteeing a certain minimum line of quality.

(Figure: two quality distributions, red and blue)

In "Calculation # 1", we test to improve the average quality like the distribution of red, but we still have the possibility of low quality. As "Calculation # 2", consider a test that avoids low quality rather than increasing the average quality as in the blue distribution.

Calculation part 2

Let's calculate to maximize the minimum quality per module.
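This is the standard maximin trick in LP form: introduce a variable $y$ for the minimum line and maximize it, subject to $y \le \sum_i A_{ij} x_i$ for every module $j$ and $\sum_i x_i = 1$. The code below also adds a small term $0.01 \sum_i (\sum_j A_{ij}) x_i$ to the objective so that, among allocations with the same minimum, one with a higher total is preferred.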

python3


m = LpProblem(sense=LpMaximize)  # mathematical model
x = addvars(nt)                  # variables: workload proportion of each test
y = addvar()                     # variable: minimum line
m += y + lpDot(A.sum(1), x) * 0.01  # objective: minimum line, with total as tie-breaker
m += lpSum(x) == 1               # workload proportions sum to 1
for j in range(nm):
    m += y <= lpDot(A.iloc[:, j], x)  # y stays below every module's probability
m.solve()
r = np.vectorize(value)(x)       # solution as a numpy array
print('%s sum %.4f minimum %.4f' % (LpStatus[m.status],
    r.dot(A.values).sum(), r.dot(A.values).min()))
for i, v in zip(A.index, r):
    print('%s workload %.4f' % (i, v))
>>>
Optimal sum 0.0948 minimum 0.0237
testA workload 0.1434
testB workload 0.0000
testC workload 0.2838
testD workload 0.5435
testE workload 0.0293

Compared with part 1, the sum is lower, but the minimum is higher.
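In fact, the maximin solution equalizes the modules: recomputing the per-module probabilities from the part 2 solution `r` (run right after the code above) shows every module sitting at the same minimum line.

python3

print(r.dot(A.values))  # per-module discovery probability under the part 2 allocation
# All four modules come out at about 0.0237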

This idea is the same as in "Solving game theory with combinatorial optimization".

That's all.
