It was a big year at Cornell Tech.

Our students, faculty and staff built innovative technologies and products. Construction of our Roosevelt Island campus made significant progress in preparation for classes next fall. And initiatives like K-12 education and Women in Technology and Entrepreneurship in New York (WiTNY) continued to make a big impact in the community.

Take a look back at 2016 with some of our favorite stories and get excited for more great things to come in 2017.

How Shortened URLs Can Be Used to Spy on People
Research by Professor Vitaly Shmatikov revealed how easily shortened URLs can be hacked, putting sometimes-sensitive information at risk.

""

Ron Brachman Joins the Jacobs Technion-Cornell Institute at Cornell Tech as the New Director
The Jacobs Technion-Cornell Institute announced Ron Brachman as the new director of the institute. Brachman, an internationally recognized authority on artificial intelligence, comes to the Jacobs Institute from Yahoo where he was the Chief Scientist and Head of Yahoo Labs.

""

2016 Cornell Tech Startup Award Winners Announced
In the second Cornell Tech Startup Awards, four startups developed in Startup Studio received $80,000 in pre-seed funding and $20,000 worth of co-working space.

""

OneBook to Rule Them All: A Cornell Tech Startup Brings Mixed Reality to Reading
Built by two Connective Media students at the Jacobs Technion-Cornell Institute, OneBook uses mixed reality to bring digital content to physical surfaces.

""

Making Global Connections in Healthcare with Connective Media
Two Connective Media students conducted research with Assistant Professor Nicola Dell in Lesotho, Africa to develop a digital system for tracking biological samples used in diagnostic services in rural areas of the country.

""

Runway Startup Postdoc Assaf Glazer Has Reinvented the Baby Monitor

Nanit is a baby monitor like no other. Using computer vision and machine learning, Nanit provides parents with easy-to-understand insights into their child’s sleep patterns. This year, Nanit raised $6.6 million in funding.

""

This Retainer Doesn’t Change Teeth — It Changes Lives

A team of Connective Media students at the Jacobs Technion-Cornell Institute developed a retainer-like device that allows mobility-impaired users to manipulate connected devices with their tongues.

""

The New York Times: The Innovation Campus — Building Better Ideas
An article in the New York Times featured Cornell Tech’s future home on Roosevelt Island for leading the charge on innovation among college campuses.

""

First ResearchStack App, MoleMapper, Launches on Android

ResearchStack — the open source framework developed by Associate Dean and Professor Deborah Estrin — launched its first app, MoleMapper, bringing mobile medical research to Android devices.

""

Where Computer Science and Community Health Meet
Sonia Sen, Technion-Cornell Dual Masters Degrees in Health Tech ’17, is working to streamline healthcare in Harlem.

""

Cornell Tech Alum Builds ‘Dreamteam’ to Create All-Star Tech Teams
What started as a Startup Studio project ended up being a valuable tool for Cornell Tech to build balanced and passionate teams.

""

Hillary for America CTO Stephanie Hannon Discusses Her Life in Tech
Earlier this year, Cornell Tech, in partnership with CUNY, launched the Women in Technology and Entrepreneurship in New York (WiTNY) initiative to empower young women to pursue careers in technology. They hit the ground running, developing programs, awarding scholarships (41 total), and even bringing in Stephanie Hannon, accomplished engineer and the first female CTO of a major party’s presidential campaign.

""


In the last Cornell Tech @ Bloomberg event of the fall semester, Warby Parker co-founder and co-CEO Neil Blumenthal spoke to a packed house about building a strong brand and growing the company without losing focus on its mission, Tech at Bloomberg reports:

Warby Parker’s Neil Blumenthal envisions his hip eyewear maker becoming the world’s biggest optical company.

“We are not building Warby Parker and our brand to scale and flip it, but to last and be around for a hundred years—and hopefully have a big impact,” said Blumenthal, co-founder and co-CEO, at the Cornell Tech @ Bloomberg speaker series.

After guiding the disruptive industry innovator to a $1.2 billion valuation through five rounds of funding, Blumenthal believes it’s vital to continue growing Warby Parker, but without abandoning the brand’s core values: great products, attainable prices, strong customer experiences and social entrepreneurship. The key to the company’s continuing success, he asserts, is keeping intact many of the company’s “traditions and rituals,” while also accepting change.

“Change is inevitable and happening faster than ever before,” Blumenthal told the audience in Bloomberg’s headquarters in midtown Manhattan. “We have to welcome and embrace it.”

Read the full article on Tech at Bloomberg.


Popular notions of creativity are often bound up in romantic ideas of a creative process that is special, hard to define, and maybe even magical.

Machines, on the other hand, are usually considered logical systems that function in a rule-bound manner. The idea that machines could be creative seems far-fetched. But is such a view justified?

Can Machines Make Art?

In the 1970s, the artist Harold Cohen tried to understand his own creativity by constructing a computer program, AARON, which could create works of art similar to his own. AARON, by most accounts, succeeded: the drawings and artwork it produced were eventually exhibited in galleries. This suggests that technology is capable of replicating human creativity—but can a machine be autonomously creative?

Michael Wilber, a PhD candidate at the SE(3) Computer Vision Group at Cornell Tech, is doubtful. He argues that, in general, machines can only do what they’ve been programmed to do—we can program a machine to create a picture, and in that sense it is creative; but we still have to tell it how to create by giving it constraints.

In a similar vein, technology already exists capable of generating images based on previous “experiences”—as an example, Wilber cites Deep Convolutional Generative Adversarial Networks, a kind of neural network that uses a system of modeling and sampling to make pictures similar to images it has seen before. But creating something fundamentally new is a different challenge. There are systems like Prisma, Wilber points out, that apply different styles to an artwork, but the result is not a new, original work of art.
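For readers curious what such a network looks like in code, below is a minimal, purely illustrative sketch (in Python, using PyTorch) of the generator half of a DCGAN-style model: it turns a random noise vector into an image-shaped output, and only learns to produce pictures resembling its training images after adversarial training against a discriminator. The layer sizes and setup are assumptions for illustration, not a description of any specific system Wilber mentions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Illustrative DCGAN-style generator: noise vector -> 64x64 RGB image."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feat*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            # -> (feat*4) x 8 x 8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            # -> (feat*2) x 16 x 16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            # -> feat x 32 x 32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            # -> 3 x 64 x 64, pixel values in [-1, 1]
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sampling: even an untrained generator produces *an* image; it only makes
# pictures "similar to images it has seen before" after adversarial training.
generator = Generator()
noise = torch.randn(1, 100, 1, 1)
fake_image = generator(noise)  # shape: (1, 3, 64, 64)
```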

“It’s easy to make a random image,” Wilber said. “But it’s much harder to get a random image that’s aesthetically pleasing.”

Wilber thinks that what matters is not teaching machines to be creative, but rather creating new tools with which humans can express creativity—in their own human way. Google’s TiltBrush, for example, creates a new genre of artwork for virtual reality by allowing users to paint in 3D space.

Fields such as art, music and literature are natural starting points when testing for machine creativity. But another way of addressing the question asks that we dig deeper into our ideas of what creativity actually is.

What is Creativity?

In The Creative Mind: Myths and Mechanisms, Margaret A. Boden defines creativity as “the ability to come up with ideas or artifacts that are new, surprising, and valuable.” She thinks this ability is a characteristic feature of human intelligence in general.

Creativity, writes Boden, is “grounded in everyday abilities such as conceptual thinking, perception, memory, and reflective self-criticism. So it isn’t confined to a tiny elite: every one of us is creative, to a degree.”

This view chimes with that of Michael Wheeler, Professor of Philosophy at the University of Stirling (UK). According to Wheeler, creativity can be thought of as “involving many of the ordinary psychological processes that we use in other contexts, like pattern matching and extending an idea through generalization.”

But understanding creativity in the context of machines, Wheeler argues, requires us to ditch the view that machines are “clunky step-by-step logical reasoning systems.”

Consider, for example, the algorithms that are capable of playing and beating even the best human opponents at strategy-based games like Chess and Go.

We tend to think of these as logical games in which players look ahead and anticipate a vast number of different moves. That’s wrong, says Wheeler. In these games, pattern-recognition, more than brute calculation, determines success. Go, for instance, involves such an explosion of possible paths that it is not possible to play by laboriously mapping and calculating many moves ahead. Instead, creative tactics, such as pattern recognition, are employed.

“That there is a DeepMind algorithm that can beat world Go experts ought to make us think that machines can do things that are not reducible to that simple, stale, algorithmic model,” Wheeler said.

Humans and Machines Creating Together

Another approach brings humans and machines together in creative collaboration.

Wheeler notes the creative possibilities within the field of genetic algorithms—techniques that mimic natural selection to design new systems from scratch.

In evolution, mutations are naturally “selected” because they survive and reproduce. But what would happen if artists intervened in the selection process?

Using genetic algorithms, artists can modify a population of random control systems by selecting for mutations that interest them—those that are aesthetically interesting, say. These traits can then be adjusted and recombined.

There are two components at work in this process: the first is the way the control system is generated (by a machine) and the second is how it is assessed (by an artist). The result is a collaboration.

“You can use tech like that to add to the creative process, where the tech does one bit and humans do the other bit,” Wheeler explained.
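A minimal sketch of that division of labor, in Python, might look like the following. The flat parameter vector standing in for a "control system" and the random stand-in for the artist's choice are assumptions made purely for illustration; in a real tool the candidates would be rendered and the artist would pick favorites by eye.

```python
import random

POP_SIZE = 8        # candidates shown to the artist each generation
GENOME_LEN = 16     # parameters of a hypothetical "control system"
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Small random perturbations play the role of mutations.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Recombine the two parents the artist selected.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def artist_selects(population):
    # Stand-in for the human step: the artist's aesthetic judgement replaces
    # a programmed fitness function. Random choice keeps the sketch runnable.
    return random.sample(population, 2)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(10):
    parents = artist_selects(population)   # generated by machine, assessed by artist
    population = [mutate(crossover(*parents)) for _ in range(POP_SIZE)]
```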

For now, our tests of machine creativity tend to rely on one-off use cases, with benchmarks like “Can a machine paint a decent picture?” and “Can it compose an adequate symphony?” And our understanding of creativity remains, as ever, couched in romantic terms.

But a future in which machines can combine different psychological processes, such as pattern recognition, reflective self-criticism and memory to produce creative output—just like humans do? That may not be so far-fetched.


Professor Serge Belongie’s mobile app, which uses computer vision to identify bird species, was recently featured in TechCrunch. The app is the result of a research partnership with Caltech and the Cornell Lab of Ornithology.

Is that a bufflehead? A coot? Maybe a loon? Get close enough to take a picture and the Merlin bird identification app will tell you in seconds — sort of like a Shazam for would-be ornithologists.

The photo ID capability has actually been a part of the greater Merlin ecosystem for more than a year, but the Cornell birders behind it just recently added the ability to do it from the mobile app. Take a picture, zoom in and let the database do the work.

Read the full article on TechCrunch.


After rising through the ranks to senior product manager for an Internet marketing giant, Ruth Sylvia decided it was time to put her career on pause and pursue an MBA.

She knew she wanted, on the one hand, exposure to a variety of industries, and on the other, a condensed program focused on tech and rooted in real-life experience. And she didn’t want to leave New York.

“The fact that, at Cornell Tech, you could learn from real practitioners appealed to me,” she said. The combination of industry engagement and the traditional MBA framework made the Johnson Cornell Tech MBA a perfect fit for Sylvia.

Now just a few months into her program, she is thrilled with her decision.

“I think my favorite part of going to Cornell Tech is the people and the projects we are introduced to,” she said. “Just by being in New York City with a focus on tech, the program attracts people who are really open to learning new things. There’s also a strong focus on solving real-life problems at companies.”

Ruth’s road to Cornell Tech

A native of Ann Arbor, Michigan, Sylvia studied history at Colgate University. There, she remembers first hearing about Internet marketing during a digital art class—when she was a senior. Those were the early days of YouTube and Twitter, and she was struck by the way her teacher, a conceptual visual artist, described the Internet as a tool that would change not only peer-to-peer communication, but also how companies told stories.

So Sylvia left Colgate with a burgeoning interest in Internet marketing. After working briefly on the West Coast, she relocated to New York City and landed a job with online marketing giant Yodle just as the company was ramping up for growth.

“It was an amazing experience,” she remembered. “I started when they had 200 employees and 5,000 customers and left when they had 1,500 employees and 40,000 customers.”

In the course of six-plus years, Sylvia rose steadily through the ranks; when she left, in early 2015, she was a senior product manager.

Her time at Yodle was followed by a short stint in product management at Audible. But she felt herself to be at a crossroads.

“Up until that point,” she explained, “I’d followed one job to the next and never really took a moment to step back and gain exposure to a lot of industries. I started searching for MBA programs in New York City—and found Cornell Tech.”

Life as a student

These days, Sylvia’s life consists of shuttling between classes and projects. She is currently working on a “company challenge” in which she and four classmates (all from different disciplines) attempt to solve a real-world customer support problem.

The company in question? Google.

Google challenged the students to answer this question: “How might we use natural language processing and machine learning to improve the experience for customers who call Google for support?”

So far, she and her team are exploring ways to replace customer service surveys with a program that allows computers not only to process speech but to convert it into meaningful themes, identify sentiment, and even uncover meanings the caller expressed but didn’t verbalize directly.
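To make the idea concrete, here is a toy sketch (not the team's actual prototype) of how call transcripts might be turned into rough themes and a sentiment signal, using scikit-learn topic modeling and a hand-rolled lexicon; the transcripts, word lists, and topic count are all invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical transcripts; speech-to-text is assumed to happen upstream.
transcripts = [
    "my ads account was charged twice and support never called back",
    "the billing page keeps crashing when I update my card",
    "great help last time, but I still cannot verify my business listing",
]

# Rough themes via topic modeling.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(transcripts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"theme {i}:", [terms[j] for j in topic.argsort()[-3:]])

# Toy lexicon-based sentiment; a production system would use a trained model.
NEGATIVE = {"charged", "crashing", "never", "cannot"}
POSITIVE = {"great", "help"}
for t in transcripts:
    words = set(t.split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    print(t[:35], "->", "negative" if score < 0 else "non-negative")
```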

“It has been so much fun to jump into something I’ve never done before,” Sylvia said.

Equally exciting will be an upcoming trip this January, when she and a handful of other Cornell Tech students travel to Israel for their iTrek project. There, she’ll tackle real-world business challenges facing two companies: Wiseye, a retail tech company, and Eco-Fusion, a healthcare app company.

“We are just starting to talk with the companies now, but I honestly can’t wait,” she said. “It will be really neat to experience their world for a brief moment, provide value and then pop back out.”


A mobile app developed by Professor Serge Belongie in partnership with Caltech and the Cornell Lab of Ornithology was recently released to iOS and Android devices. The app uses computer vision to ID bird species with a photo, Digital Trends reports:

If you’re a budding birder struggling to identify all the weird and wonderful feathered creatures you happen upon, the latest version of a free bird ID app could be just the ticket when you’re out and about.

Developed by Cornell Tech and California Institute of Technology computer vision researchers in partnership with the Cornell Lab of Ornithology, the powerful Merlin Bird ID app was built using machine-learning technology to help it instantly identify hundreds of different species from across North America.

Read the full article on Digital Trends.


Computer vision app can identify North American bird species from photographs

Ithaca, NY, New York, NY, & Pasadena, CA — The Merlin Bird Photo ID mobile app has been launched and, thanks to machine-learning technology, can identify hundreds of North American species it “sees” in photos. The app was developed by Caltech and Cornell Tech computer vision researchers in partnership with the Cornell Lab of Ornithology and bird enthusiasts. Because Merlin Bird Photo ID can be used on mobile devices, it can go anywhere bird watchers go.

“When you open the Merlin Bird Photo ID app, you’re asked if you want to take a picture with your smartphone or pull in an image from your digital camera,” explains Merlin project leader Jessie Barry at the Cornell Lab. “You zoom in on the bird, confirm the date and location, and Merlin will show you the top choices for a match from among the 650 North American species it knows.”

Caltech and Cornell Tech computer scientists trained Merlin to recognize birds by showing it nearly one million photos that were collected and annotated by birders and volunteers mobilized by the Cornell Lab. Merlin scans its photo database for possible matches. Then, like any good birder, the system considers species that would be found at that specific time of year and in that location, using information from the eBird program, which collects an average of seven million bird observation records each month from around the world.
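The release doesn't spell out how those two signals are combined, but conceptually the final ranking weighs the vision model's per-species scores against a prior over which species are plausible at the user's date and location. A hypothetical sketch of that combination step (the species names, scores, and prior values are made up):

```python
# Hypothetical per-species probabilities from an image classifier.
vision_scores = {"Bufflehead": 0.46, "Common Loon": 0.31, "American Coot": 0.23}

# Hypothetical frequency prior for this date and place, e.g. derived from
# eBird checklists: how often each species is reported there that week.
location_prior = {"Bufflehead": 0.02, "Common Loon": 0.10, "American Coot": 0.25}

def rank_candidates(vision_scores, location_prior):
    """Combine image evidence with the seasonal/location prior and rank."""
    combined = {
        species: vision_scores[species] * location_prior.get(species, 1e-6)
        for species in vision_scores
    }
    total = sum(combined.values())
    return sorted(((s, v / total) for s, v in combined.items()),
                  key=lambda item: item[1], reverse=True)

for species, score in rank_candidates(vision_scores, location_prior):
    print(f"{species}: {score:.2f}")
```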

“In building Merlin Bird Photo ID we were especially concerned with the quality and the organization of the data,” says Serge Belongie, professor of computer science at Cornell Tech. Together with professor Pietro Perona of Caltech, he is the co-founder of Visipedia, the Google-funded umbrella project that is using advances in machine learning and computer vision to help classify objects in photographs. “Ultimately we want to create an open platform that any community can use to make a visual classification tool for butterflies, frogs, plants, or whatever they need.”

“This app is the culmination of seven years of our students’ hard work and is propelled by the tremendous progress that computer vision and machine learning scientists are making around the world. All of a sudden, our smartphones can really see!” says Perona, the Allen E. Puckett Professor of Electrical Engineering in the Caltech Division of Engineering and Applied Science. “This was a distant dream when I was a graduate student and now it’s finally happening.”

How good is Merlin Bird Photo ID? “Accuracy is around 90 percent if the user’s photo is of good quality. Submit a fuzzy image or one in which the bird is small or partially covered by leaves and the odds of getting an accurate match go down,” says Caltech postdoctoral researcher Steve Branson.

Despite the high-tech advances, humans are still an important part of the process. “We need eBird data from bird watchers along with experts who can label the photos used to train Merlin,” says Caltech graduate student Grant Van Horn. “You need teachers to teach the machine what it needs to do. Our system combines the expertise of hundreds of birders and ornithologists.” Van Horn and Branson are both part of the Visipedia team, and developed the algorithms that allow Merlin to learn to recognize the birds.

There are more thrills to come. Just around the corner is a Merlin Bird Photo ID release in Spanish for birds in Mexico. Down the road, the Merlin team will produce versions for South America, Europe, Asia, Africa, Australia—all parts of the world.

“The wonderful thing about this project is the collaboration with the Visipedia team,” says Barry. “We have a product that really works because it’s supported by fantastic research and is great for the birding community because it’s built for birders by birders. You just have to try it!”

Merlin Bird Photo ID may be downloaded free for iOS or Android systems from the Apple and Google Play app stores. It is now included in the Merlin Bird ID app, which was originally released in 2014 to identify birds by asking users five questions about the birds they saw.

Merlin Photo ID is powered by Visipedia with support from Google, the Jacobs Technion-Cornell Institute, and the National Science Foundation.


By Doug Stayman, Associate Dean

Cornell Tech has several “brand pillars” that drive the school’s content, curriculum, and strategic direction. One of these core pillars is “Tech for Impact”. Johnson Cornell Tech MBA students embody this concept through their backgrounds, experience, and passions.

Anna McGovern, Johnson Cornell Tech MBA ’17, holds an undergraduate degree in computer science. A New York City native, she spent three years working as a product manager and running an innovation lab for Citigroup, before electing to pursue her MBA at Johnson Cornell Tech.

Utilizing technology to effect social change resonates particularly strongly with McGovern, and the fall Company Challenge project allowed her to explore her interests in this area.

“I was placed on a project with the New York City Mayor’s Office to combat domestic violence,” McGovern says. “I am very interested in going into this social entrepreneurship space, and when I was put on the team, I realized how excited I was. Being from New York, I was thrilled to be put onto a project for the City and have that opportunity.”

The Challenge is to use mobile technology to proactively deliver information and tools to survivors of domestic violence. These tools, which offer safety and privacy protections, may mean the difference between life and death for some victims. The Mayor’s Office allowed McGovern’s team to visit Family Justice Centers located throughout the City, and conduct interviews with both victims and staff.

“It’s a really compelling Challenge,” McGovern says. “We’ve been able to go onsite, conduct interviews, and really get inside this world. I’m really not sure what we’re going to end up building, but I’m getting more comfortable with that, and just figuring it out as we go.”

""

Cornell Tech students wear purple in support of NYC Go Purple Day to bring awareness to domestic violence. Photo credit: Anna McGovern, Johnson Cornell Tech MBA ’17

Social entrepreneurship is just one area where current students are using technology for impact. Nikhil Swaminathan, Johnson Cornell Tech MBA ’17, is trained as a software engineer, and was involved in an educational technology startup in India before coming to Cornell. His team’s Company Challenge was to help x.ai, a virtual personal assistant company, to improve engagement with young users.

“x.ai offers a virtual personal assistant called Amy, that helps you schedule your meetings over email,” Swaminathan says. “The problem the company had was that young people struggle with the idea of having a personal assistant and don’t use it.”

Swaminathan’s team is made up of two Johnson Cornell Tech MBA students and two Cornell Tech Connective Media students. Team members began using “Amy” and sought feedback from other students and startup founders. Based on their research, the team decided to develop a prototype version “with a personality”, where the virtual assistant interacts differently with individuals based on specific defining attributes.

“For example,” says Swaminathan, “if you are interacting with someone senior, you want Amy to be more respectful of the other person’s time and work your schedule around theirs. We worked around the clock for the first Sprint and had a successful presentation which was well liked by the mentors.”

The range of ways that technology can influence business and society is vast. This year’s Johnson Cornell Tech class is taking this brand pillar to heart, and finding new ways to positively impact our world.


As a young engineer, Stephanie Hannon liked certain aspects of her work but was yearning for a change.

She knew she liked transforming creative ideas into real products and tangible outcomes. And she knew she liked integrating teams, seeing how different departments—or differing opinions—could somehow come together to solve problems.

Those interests brought Hannon to product management. After working at Cisco, Facebook, and Google—where she eventually led Google’s Civic Innovation and Social Impact division—Hannon was hired in April 2016 as campaign CTO of Hillary Clinton’s presidential campaign, becoming the first female CTO of a major party’s presidential campaign.

She spoke about that journey to an audience of young, predominantly female technologists as part of Cornell Tech’s Fall Conversation, a program hosted by Women in Technology and Entrepreneurship in New York (WiTNY), an initiative that supports young women pursuing careers in technology.

Making an Impact

Leveraging technology to drive impact has been at the center of Hannon’s career.

For example, she decided to leave Google’s Gmail team for Google Maps. There, Hannon recounted: “One of the most important things we did was map places that had probably never been mapped before. We built tools that could warn people about natural disasters and weather that’s coming at them.”

She also worked with city agencies across Europe, helping them open up their transit data so that it could be analyzed and used to improve the daily lives of citizens.

“I traveled all over Europe,” said Hannon. “It was fun and impactful to create an open standard for cities to share public transportation.”

At first, she said, many European cities were skeptical. But, after successfully piloting the system in Portland, Oregon, Hannon and her team were able to convince most to come on board.

The same desire for impact brought Hannon to Clinton’s campaign, where, aside from gaining the title of CTO and having the opportunity to build an engineering team from scratch, she knew she would be working on the first major party presidential campaign with a woman at the top of the ticket.

Making a Career

Hannon spoke about some of the ways she has evolved professionally—starting with her approach to problem solving. “It’s a skill we all have to develop,” she said. “You can’t be afraid of conflict. If there are different views, you need to be able to talk with the other person to get to a resolution.”

Crystal Aya, a CUNY student and former WiTNY intern, said Hannon’s story helped her understand, piece by piece, how such a career is actually built. “When I heard how she went down her path—to me, it was surprisingly human, it was all based on what she wanted to do,” said Aya. “It’s a relief hearing someone like her say something like that.”

Brittany Grieve, another panelist and a freshman majoring in Computer Science at Hunter College, also expressed a feeling of relief following Hannon’s talk. “There’s so much pressure on women in technology fields to succeed,” Grieve said. “And a lot of women, it seems, feel pressured to take roles just to prove that they can, not necessarily because of what they want.”

But Hannon encouraged those in attendance to follow their passions and focus on experience. “A lot of big companies might try to woo you out of being an engineer right away,” she said, referencing the tendency among tech companies to quickly transition engineers with creative minds into product management positions. “Fundamentally, engineering—building systems for the real world—is such an important experience to have. That experience—building real production systems, having the world use them, seeing them fail, and then learning to debug them—is priceless.”

As for the future, Hannon’s message to the audience was clear: “We need more senior women technology leaders. That’s the only way things will change. We need more role models.”


Miscommunications via smartphones have been a running joke for nearly a decade: Mistyped and missing words, unfamiliar slang and acronyms can sometimes make for comical conversations.

But even when a message is communicated in complete sentences, we often misjudge the author’s intentions and current emotional state.

It’s this disconnect that three Cornell Tech students, Hsiao-Ching Lin, Huai-Che Lu, and Claire Opila — all graduating with Technion-Cornell Dual Masters Degrees in Connective Media in spring 2017 — are aiming to solve. Their solution is a smartphone keyboard app for iOS or Android called Keymochi.

How it works
Keymochi uses data like typing speed, punctuation changes, the amount of phone movement, the distance between keys, and a rough sentiment analysis to detect a user’s emotions.

That means that as a user is typing out a text message or email via smartphone, each movement adds to an emotional profile of the user. In addition, users can select one of 16 pictures to indicate their mood by using a photographic affect meter, or PAM, tool.

Once the user is finished typing their message, the data is automatically encrypted and uploaded anonymously to the Keymochi database, where the team can start to build a user-specific machine-learning algorithm.

To protect privacy, Keymochi does not store what is typed, just how it is typed—the physical cues and the sentiment analysis from PAM.
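As a rough illustration of that design, the sketch below shows what a stored typing-session record and a per-user model could look like; the feature names, mood labels, and choice of classifier are assumptions for the sake of example, not Keymochi's actual implementation.

```python
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class TypingSession:
    # Only *how* the user typed is stored, never the text itself.
    chars_per_second: float
    correction_ratio: float    # stand-in for punctuation/backspace changes
    mean_key_distance: float   # average distance between consecutive keys
    phone_motion: float        # accelerometer magnitude while typing
    pam_mood: int              # which of the 16 PAM pictures was chosen

def to_features(s: TypingSession):
    return [s.chars_per_second, s.correction_ratio,
            s.mean_key_distance, s.phone_motion, s.pam_mood]

# Hypothetical labelled sessions for one user (0 = calm, 1 = stressed).
sessions = [
    TypingSession(4.1, 0.05, 2.3, 0.10, 3),
    TypingSession(2.0, 0.22, 3.8, 0.45, 14),
    TypingSession(4.5, 0.04, 2.1, 0.08, 2),
    TypingSession(1.8, 0.30, 4.0, 0.52, 15),
]
labels = [0, 1, 0, 1]

# One model per user, so each person's typing baseline is learned separately.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit([to_features(s) for s in sessions], labels)

new_session = TypingSession(2.2, 0.25, 3.6, 0.40, 13)
print(model.predict([to_features(new_session)]))  # e.g. [1] -> "stressed"
```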

“We have a rough prototype that we built last semester with exaggerated emotional data based on ourselves,” said Opila. “But now we’re going through the process of extending the technology and we’ll be conducting longer studies with other students.”

So far, the app is able to predict emotions with 82 percent accuracy.

""

Improving mental health
The idea for Keymochi emerged out of a desire to build a mental health-focused application that could be configured for any number of scenarios. Once they had settled on a direction, the team studied the latest research from institutions such as MIT’s Media Lab to learn more about affective computing.

“The subject is fascinating—subtle changes within someone’s facial expression can change their typing pattern,” said Lu. “After reading that, we started to look into ways to collect emotional data and develop a machine-learning algorithm.”

Because each of us has different ways of typing and forming sentences, Lu added, it would be erroneous for Keymochi to operate with a standardized set of assumptions.

“While there might be a baseline point for each user, over time each user’s unique interactions with their phone would contribute to a personalized data set,” Lu said.

Providing emotional support
Eventually, the Keymochi team hopes their app will be able to apply circumstantial information for a more accurate emotional reading.

“If someone is sitting on the subway or at their desk in an office, we’d also collect the contextual data, like the location and the time, to enhance the machine learning results,” said Lin.

This, in turn, could support other connected experiences. For example, if Keymochi detects that someone is sad, then when that person walks into their home, Opila noted, the lights might adjust to a more cheerful brightness or a connected device could play their favorite song.

""

From left to right: Claire Opila, Hsiao-Ching Lin, Huai-Che Lu

Commercial applications
Though the team had originally been developing an application for the mental health field, rigorous testing and feedback from Cornell Tech mentors and students made them consider a wider range of use cases for the application.

“We know that companies often record conversations when customer service agents talk to customers,” said Lin. “With the Keymochi keyboard, agents might be able to understand ahead of time what kind of mood the customer is in—and make recommendations for appropriate actions and dialogue during the call. At the very least, it could help a customer service agent be prepared for a difficult experience.”

Lu said he hopes that Keymochi could eventually be a customizable customer service tool, so that many different companies can use it for various needs.

“We’re thinking about creating a software development kit that developers can use for their own apps,” said Lu. “By implementing our application, it would help software companies learn more about their own customers and help them develop a better product.”

As the team continues to build out the app, one thing is for certain: Applying affective computing to everyday life represents an enormous opportunity.

“There’s a market exploding for emotionally-driven services and communications,” said Opila. “With this highly personalized system that’s tailored to the user, we can understand someone’s emotions in a variety of situations throughout the day.”