Contest
Dear Participants,
Thank you again for your submissions to the Fake Life Recognition Contest.
We launched this competition with the hope of gathering insights into the question: "Can a computer find a difference between Life and Non-Life?". There are many challenges even in finding an appropriate way to ask the question. What kind of data should we use? How do we choose datasets if we do not have a working definition of "life"?
The question of Life vs Non-Life is older than the scientific field of Artificial Life, and we could spend many more decades just thinking about it. But we decided to take the risk and try to learn by doing, even if it meant getting everything wrong at first and improving through incremental criticism.
The results of the contest are better than anything I could have imagined. Despite the low information content of the data (unlabelled snippets of 2D trajectories with distorted time scales), two teams scored 7 out of 10 points with a total of five submissions: Johnowhitaker (two submissions) and Christin (three submissions).
The final phase of the contest, where we tested the high-ranking submissions on two unreleased datasets, revealed a winner with a perfect score: Team Christin (Christin Puthur and Tom Froese of the Embodied Cognitive Science Unit at OIST), congratulations! We will contact you privately so you can collect your $1000 cash prize. Johnowhitaker came very close.
For the final phase, since dataset 10 (the seal diving data) had more samples and more variety, we chose 5 random CSV files out of 40, without replacement, and tested the algorithms of both competitors on them. We repeated this process 100 times and took the average score of each algorithm. (The drone data was treated the same way as in the first phase.)
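For concreteness, here is a minimal Python sketch of that resampling procedure. This is not our actual scoring script: the `classify(csv_path)` interface is a hypothetical stand-in for a competitor's algorithm, and we assume the per-trial score is the fraction of seal files labelled as "life" (the seal diving data is all real life).

```python
import random

N_TRIALS = 100  # number of repeated random draws
N_SAMPLE = 5    # CSV files drawn per trial, out of the 40 available


def average_seal_score(classify, seal_files):
    """Average score of `classify` over repeated random subsets.

    `classify(csv_path)` is a hypothetical interface returning True when
    the algorithm labels a trajectory as "life". Each trial draws
    N_SAMPLE files without replacement; the trial score is the fraction
    of drawn files classified as "life".
    """
    total = 0.0
    for _ in range(N_TRIALS):
        subset = random.sample(seal_files, N_SAMPLE)  # 5 of 40, no replacement
        total += sum(classify(f) for f in subset) / N_SAMPLE
    return total / N_TRIALS
```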
Team Christin, 3 submissions with 7 points each:
Submission 1 scores: Drone data 1.0, Seal data 0.898
Submission 2 scores: Drone data 0.0, Seal data 1.000
Submission 3 scores: Drone data 1.0, Seal data 0.926
Team Johnowhitaker, 2 submissions with 7 points each:
Submission 1 scores: Drone data 1.0, Seal data 0.171
Submission 2 scores: Drone data 0.0, Seal data 0.703
Find more details in the GitHub repository. We can now make public the sources of all datasets, as well as the scripts used to clean and normalise the data. There were spiders, robot arms, artificial chemistry, sharks, and more!
We are very glad to share not only the data, but also the winning hypothesis that underlies Christin and Tom's classification algorithm: https://github.com/LanaSina/FLR_contest. We would also like to share everyone's submissions on GitHub, including those that did not win. Please contact us if you are OK with sharing your hard work with the world.
We would like to profusely thank our sponsor, Cross Compass, as well as all the scientists who generously contributed their data or their time: Hiroki Sayama and Norihiro Maruyama for contributing data, Elhadji Amadou Oury Diallo and Chitora Shindo for helping to deploy the contest, and Kaan Akinci for choosing the final datasets and for coding and running the final scoring algorithm.
Lana, Olaf, and Kaan.