
NEW YORK — The Common Visual Data Foundation (CVDF) today announced the COCO 2017 Stuff Segmentation Challenge, designed to spur innovation in semantic segmentation of stuff classes. CVDF, in cooperation with a research team at the University of Edinburgh, collaborated with Mighty AI to annotate 55,000 images across 91 stuff classes for this challenge. The submission deadline is October 8. For more information and to enter the competition, visit the Stuff Segmentation Challenge page.

“At CVDF, we aim to cultivate innovation and advancements in the computer vision and machine learning communities. Not only do these competitions get new people involved in driving the future of the field, but the datasets also become important free resources for our research community,” said Serge Belongie, President of the Common Visual Data Foundation and a professor at Cornell Tech.

Stuff classes are background materials defined by homogeneous or repetitive patterns of fine-scale properties but with no specific or distinctive spatial extent or shape, such as grass, walls, or sky. Stuff covers about 66% of the pixels in COCO (Common Objects in Context) and informs important aspects of an image: the scene type, which thing classes are likely to be present and where they are located, and the geometric properties of the scene.
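
For researchers who want a quick look at the data, the sketch below shows one way the stuff annotations could be browsed with the pycocotools Python library. It is illustrative only: the annotation filename is an assumption, and the challenge page should be consulted for the official download and evaluation formats.

    # A minimal sketch of browsing COCO stuff annotations with pycocotools.
    # Assumptions: pycocotools is installed, and a stuff annotation file has
    # been downloaded; the filename below is illustrative, not official.
    from pycocotools.coco import COCO

    coco = COCO('annotations/stuff_train2017.json')

    # List the stuff categories (e.g. grass, wall, sky).
    cats = coco.loadCats(coco.getCatIds())
    print(sorted(c['name'] for c in cats))

    # Load all stuff annotations for one image and convert the first
    # region to a binary pixel mask (a NumPy array of 0s and 1s).
    img_id = coco.getImgIds()[0]
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    mask = coco.annToMask(anns[0])
    print(mask.shape, mask.sum())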

“Stuff classes have received relatively little attention from the research community. Nevertheless, stuff is important for full scene understanding, and we hope that this competition spurs innovation toward this goal,” said Holger Caesar, a PhD candidate at the University of Edinburgh whose research led to the creation of the dataset.

Mighty AI and its community of over 300,000 specialized annotators segmented, categorized, labeled, and validated the data within the COCO images.

“Large, publicly available datasets like COCO are a big reason advances in computer vision are happening at lightning speed,” said Daryn Nakhuda, CEO and co-founder of Mighty AI. “We are thrilled to partner with CVDF and contribute high-quality labeled data to further accelerate research and training in this space.”

In 2014, the CVDF released the COCO dataset, which annotates over 200,000 images with 80 thing classes. This large-scale dataset, and the challenges organized around it, were crucial in enabling the deep learning breakthroughs that are now ubiquitous in autonomous vehicles, care robots, and other computer vision applications. This competition extends the dataset with 91 stuff classes, creating an even richer resource with new research opportunities.

The COCO Stuff Segmentation Challenge is made possible through sponsorship from Microsoft, Facebook, Mighty AI, and Google Cloud Platform.

For more information, visit http://cocodataset.org.

About The Common Visual Data Foundation

The Common Visual Data Foundation is a 501(c)(3) non-profit organization with a mission to enable open, community-driven research in computer vision through the creation of academic datasets and corresponding competitions. The availability of high-quality labeled data is essential for enabling and evaluating state-of-the-art academic research. The competitions sponsored by the foundation, including the COCO Detection, Keypoint, and Stuff challenges, help the community monitor its progress and focus research efforts on core computer vision problems. In addition to the datasets and competitions hosted by the foundation, the tools used in their creation are open-sourced to aid data-related research across the community.

About Mighty AI

Founded in 2014, Mighty AI delivers training data to companies that build computer vision models for autonomous vehicles. Our platform combines guaranteed accuracy with scale and expertise, thanks to our full stack of annotation software, consulting and managed services, proprietary machine learning, and global community of pre-qualified annotators. Visit www.mty.ai to learn more, and follow us at @mighty_ai.