CodaLab.org is a platform for hosting competitions. You can also build your own server from the open source code published on GitHub.
This is a memorandum of know-how for holding competitions on CodaLab.org.
See the official documentation for details.
For reference, use the Iris example challenge. Download the bundle (click "DOWNLOAD THE BUNDLE OF THE CHALLENGE" on the "Learn the Details" tab) and rewrite its YAML file to create a new competition. Even if you only write it roughly, you can modify it later on the web.
The bundle must be zipped before uploading, but do not zip the enclosing folder: the files must appear directly at the top level when the zip is extracted, otherwise an error will occur.
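As an illustration, here is one way to build such a zip with Python's standard zipfile module so that no enclosing folder ends up in the archive. The directory and file names (my_bundle, my_bundle.zip) are just placeholders for this sketch.
make_bundle.py
#!/usr/bin/env python
# Zip the bundle contents so the files sit at the root of the archive
# (no enclosing folder). Directory/file names are placeholders.
import os
import zipfile

bundle_dir = 'my_bundle'    # folder holding the YAML file, HTML pages, sub-zips, etc.
out_zip = 'my_bundle.zip'

zf = zipfile.ZipFile(out_zip, 'w', zipfile.ZIP_DEFLATED)
for root, dirs, files in os.walk(bundle_dir):
    for name in files:
        full_path = os.path.join(root, name)
        # arcname is relative to bundle_dir, so no extra top-level folder is created
        arcname = os.path.relpath(full_path, bundle_dir)
        zf.write(full_path, arcname)
zf.close()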
The Iris example challenge is a code submission competition, so it is hard to tell what is going on at first. The following is an outline.
submission
--The user submits a zip of the following files, with no enclosing directory.
The metadata file specifies the command used to run the code.
metadata
command: python $program/cat.py $input $output
description: Compute scores for the competition
The file submitted together with the code. Its contents can be anything.
submission.txt
0
1
0
....
The file to be executed. It reads the file submitted alongside it from run_dir and writes the resulting file to output_dir (which is later read by the scoring program).
cat.py
#!/usr/bin/env python
import os
from sys import argv
from shutil import copyfile

if __name__ == "__main__":
    # CodaLab passes the input and output directories as command-line arguments
    if len(argv) == 1:
        input_dir = os.path.join('..', 'input')
        output_dir = os.path.join('..', 'output')
    else:
        input_dir = argv[1]
        output_dir = argv[2]
    run_dir = os.path.abspath(".")
    # copy the file shipped with the submission to output_dir,
    # where the scoring program will pick it up
    copyfile(os.path.join(run_dir, 'program', 'submission.txt'),
             os.path.join(output_dir, 'submission.txt'))
In the above sample, the submitted file is simply copied to output_dir.
In a code submission competition, a script and a trained model can be submitted together. In that case, the script needs to read the (private) test data from input_dir, as sketched below.
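The following is only a sketch of what such a prediction script could look like, not part of the iris bundle. It assumes the submission zip also contains a pickled scikit-learn model named model.pkl and that the organizer provides the private test data as test_data.csv inside input_dir; both file names are made up here.
predict.py
#!/usr/bin/env python
# Hypothetical prediction step for a code submission:
# load a model shipped in the zip, predict on the private test data,
# and write the predictions where the scoring program will look for them.
import os
from sys import argv
import pickle
import numpy as np

if __name__ == "__main__":
    if len(argv) == 1:
        input_dir = os.path.join('..', 'input')
        output_dir = os.path.join('..', 'output')
    else:
        input_dir = argv[1]
        output_dir = argv[2]
    run_dir = os.path.abspath(".")

    # load the trained model shipped inside the submission zip (name is an assumption)
    with open(os.path.join(run_dir, 'program', 'model.pkl'), 'rb') as f:
        model = pickle.load(f)

    # read the private test data prepared by the organizer (file name is an assumption)
    X_test = np.loadtxt(os.path.join(input_dir, 'test_data.csv'))

    # write predictions to output_dir, one label per line
    y_pred = model.predict(X_test)
    np.savetxt(os.path.join(output_dir, 'submission.txt'), y_pred, fmt='%d')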
--The organizer uploads a zip of the following files as the scoring program, again with no enclosing directory.
The metadata file specifies the command used to run the code.
metadata
command: python $program/evaluate.py $input $output
description: Compute scores for the competition
The file to be executed.
evaluate.py
#!/usr/bin/env python
import os
from sys import argv
import numpy as np

if __name__ == "__main__":
    # CodaLab passes the input and output directories as command-line arguments
    if len(argv) == 1:
        input_dir = os.path.join('..', 'input')
        output_dir = os.path.join('..', 'output')
    else:
        input_dir = argv[1]
        output_dir = argv[2]
    # 'res' holds the user's output, 'ref' holds the organizer's reference data
    y_submit_file = os.path.join(input_dir, 'res', 'submission.txt')
    y_ref_file = os.path.join(input_dir, 'ref', 'test_labels.csv')
    load_y_ref = np.loadtxt(y_ref_file)
    load_y_submission = np.loadtxt(y_submit_file)
    # mean absolute difference between the reference labels and the submission
    score = np.abs(load_y_ref - load_y_submission).sum() / float(load_y_ref.size)
    print("score: %.2f\n" % score)
    # scores.txt is the fixed file name that CodaLab reads for the leaderboard
    score_file = open(os.path.join(output_dir, 'scores.txt'), 'w')
    score_file.write("score: %.2f\n" % score)
    score_file.close()
How the above sample works:
--First, the user's code is executed (prediction step); its result is written to output_dir as described above.
--That result is then copied to input_dir/res of the scoring step (this is confusing).
--The true labels test_labels.csv have been uploaded to input_dir/ref.
--The scoring program compares the two and saves the result in output_dir as "scores.txt" (a fixed file name).
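The directory layout seen by the scoring program therefore looks roughly like this (file names taken from the sample above):
input/
    res/
        submission.txt    <- output of the user's prediction step
    ref/
        test_labels.csv   <- true labels uploaded by the organizer
output/
    scores.txt            <- written by evaluate.py, shown on the leaderboard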
scores.txt
score: 0.98
This result file is displayed on the leaderboard. The keyword "score:" is specified in the YAML file (or in the web GUI).
Currently, Python and the installed libraries are old versions (Python 2), so code that relies on the latest modules will fail. A quick way to check what is installed is sketched after the list below.
Python and module versions as of 2017/02/14:
Python version: 2.7.10 |Anaconda 2.4.0 (64-bit)| (default, Oct 19 2015, 18:04:42)
numpy==1.10.1
scikit-learn==0.16.1
scipy==0.16.0
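If in doubt, one option is to submit a small script that just prints the versions it finds; the printed output should then appear in the submission's logs (as far as I can tell). This is a hypothetical helper, not part of the iris bundle:
check_env.py
#!/usr/bin/env python
# Hypothetical helper: print the Python and library versions available on the worker.
import sys
import numpy
import scipy
import sklearn

if __name__ == "__main__":
    print("python: %s" % sys.version)
    print("numpy: %s" % numpy.__version__)
    print("scipy: %s" % scipy.__version__)
    print("scikit-learn: %s" % sklearn.__version__)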
I do not know whether anything other than Python can be used. Since the server seems to be Windows (according to the documentation), it should also be possible to upload a compiled binary. If you want to use the latest Python modules, other scripting languages, and so on, it is better to build your own server.
For a competition where participants submit results rather than code, just check "Results Scoring Only" in the web GUI.
Since the user's submission is saved directly in the scoring program's input_dir/res, evaluation only requires reading it, reading the true values from input_dir/ref, computing the score, and writing it to output_dir/scores.txt.
Note: even if there is only one submission file, it cannot be submitted unless it is zipped into a zip file.
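For example, a single result file can be wrapped with the standard zipfile module, keeping the file at the root of the archive (file names here are just examples):
zip_submission.py
#!/usr/bin/env python
# Wrap a single result file in a zip, with the file at the root of the archive.
import zipfile

zf = zipfile.ZipFile('submission.zip', 'w', zipfile.ZIP_DEFLATED)
zf.write('submission.txt', 'submission.txt')  # second argument is the name inside the zip
zf.close()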