AJ Capital Partners and Cornell University announced plans to build a hotel on the university’s new Cornell Tech campus on Roosevelt Island. Cornell Tech, which will open the first phase of the new campus September 2017, is a revolutionary model for graduate education, extending Cornell University’s research and academic prowess to the heart of New York City by bringing together faculty, business leaders, tech entrepreneurs and students in a catalytic environment to produce visionary results.

AJ Capital Partners will open “Graduate Roosevelt Island,” a 196-room ground-up hotel development with panoramic views of the Manhattan skyline, in 2019. Designed by the world-class architectural firm Snøhetta, the hotel will be located in the heart of the campus alongside the Verizon Executive Education Center, which will have conference, executive program and academic workshop space, also opening in 2019.

“Graduate Roosevelt Island is the type of transformative project that has come to epitomize our company,” said Ben Weprin, founder and CEO of AJ Capital Partners. “The hotel will sit prominently at the gateway to Cornell Tech, the most innovative academic campus in the world; and will offer guests, campus visitors and Roosevelt Island citizens a truly distinctive and unique experience. We couldn’t be more excited to partner with Cornell University and to introduce Graduate Hotels to New York City.”

“Cornell is thrilled to partner with AJ Capital Partners for a hotel on our Roosevelt Island campus. Their unique aesthetic combined with the world-class architecture of Snøhetta will be an asset for Cornell, the Roosevelt Island community and New York City,” said Dan Huttenlocher, Cornell Vice Provost and Dean of Cornell Tech. “The Graduate Roosevelt Island hotel will reflect the unique history of the island and, along with the Verizon Executive Education Center, it will create a place where the entire tech community can convene in New York City, expanding the impact our campus will have on technology beyond its degree programs.”

Created for travelers who seek memory-making journeys, Graduate Hotels are part of a well-curated, thoughtfully crafted collection of hotels that reside in the most dynamic, university-anchored markets across the country. Every property celebrates and commemorates the optimistic energy of its community, while offering an extended retreat to places that often played host to the best days of our lives. Locations include Ann Arbor, Mich.; Athens, Ga.; Charlottesville, Va.; Madison, Wis.; Oxford, Miss.; and Tempe, Ariz., as well as Berkeley, Calif.; Lincoln, Neb.; Minneapolis, Minn.; and Richmond, Va., slated to open in 2017, and Bloomington, Ind., and Seattle, Wash., in 2018.

The hotel’s iconic slender profile marks the entrance to the campus while offering guests unobstructed views of the skyline through floor-to-ceiling windows. The hotel’s comfortable residential aesthetic will reference the history of Roosevelt Island while also capturing the spirit of innovation inspired by the Cornell Tech campus. The property will include a full-service restaurant, a rooftop bar with expansive views of Manhattan, and 5,200 square feet of flexible meeting and event facilities for hotel guests and the Roosevelt Island community.

Opening September 2017, the first phase of Cornell Tech’s Roosevelt Island campus will include three buildings: The Bloomberg Center, the campus’ first academic building; The Bridge, a building for innovative companies to locate on campus; and The House, a residential building for students, faculty and staff that aims to become the world’s first high-rise residential building constructed to “Passive House” energy efficiency standards.

When fully completed, the campus will span 12 acres on Roosevelt Island and house approximately 2,000 students and hundreds of faculty and staff. The campus master plan was designed by Skidmore, Owings & Merrill with James Corner Field Operations, and includes a number of innovative features and facilities across a river-to-river campus with expansive views, a series of green, public spaces, and a seamless integration of indoor and outdoor areas. The campus will be one of the most environmentally friendly and energy-efficient campuses in the world.

For more information on Graduate Hotels, please visit www.graduatehotels.com.

About AJ Capital Partners

Adventurous Journeys Capital Partners, based in Chicago, is an accomplished team of hospitality and real estate investors whose innate passion is to create a one-of-a kind portfolio of timeless assets. The counter-culture investors acquire, design and develop transformative real estate throughout the United States, Mexico, and the Caribbean. In fall 2014, AJ Capital Partners launched the Graduate Hotels collection. AJ Capital Partners continues to grow its portfolio of lodging investments, firmly establishing the group as visionary leaders in the lifestyle-driven investment industry. For more information on AJ Capital Partners, please visit www.ajcpt.com.

About Cornell University

Located in Ithaca, N.Y. and New York City, Cornell is a private, Ivy League university and the land-grant university for New York State. Cornell’s mission is to discover, preserve, and disseminate knowledge; produce creative work; and promote a culture of broad inquiry throughout and beyond the Cornell community. Cornell also aims, through public service, to enhance the lives and livelihoods of our students, the people of New York, and others around the world.

About Cornell Tech

Cornell Tech brings together faculty, business leaders, tech entrepreneurs, and students in a catalytic environment to reinvent the way we live in the digital age. Cornell Tech’s temporary campus has been up and running at Google’s Chelsea building since 2013, with a growing world-class faculty, and more than 200 master’s and Ph.D. students who collaborate extensively with tech-oriented companies and organizations and pursue their own start-ups. Construction is underway on Cornell Tech’s campus on Roosevelt Island, with a first phase due to open September 2017. When fully completed, the campus will include 2 million square feet of state-of-the-art buildings, over 2 acres of open space, and will be home to more than 2,000 graduate students and hundreds of faculty and staff.


An interdisciplinary team of Cornell Tech, Cornell and New School students recently took home the grand prize at MIT’s FinTech Hackathon.

The team included: Sindhu Babu, Connective Media ‘18; Patrick Baginski, Johnson MBA ‘17; Aamer Hassanally, Johnson Cornell Tech MBA ‘17; Abhiram Muddu, Johnson Cornell Tech MBA ‘17; Mario Rial, Computer Science ‘17; Constantin Scholl, Computer Science ‘17; Brinna Thomsen, Parsons Communication Design ‘18.

The Winning Idea: Switch

Fifty-five million American freelancers cite income volatility and the cost of self-insurance as their biggest barriers to doing more work. Switch is an intelligent digital broker that recommends personalized liability insurance based on a worker’s gig profile. On-demand coverage lets users save money by insuring themselves only while they are on the job. With quick onboarding, simple terms and effortless claims, freelancers can spend less time covering losses and more time earning money.

By the end of the hackathon, the team had built a functional prototype of the app, using Even Financial’s API to recommend pre-approved financial products to users in real time.

The team drew on their experience in the Cornell Tech Studio to help guide their product development and narrative.

“In the Studio we learn to tell a narrative about our product that draws on computer science, design and business,” Aamer Hassanally, Johnson Cornell Tech MBA ‘17 said. “Being able to do that successfully as a team is what I think gave us an edge at MIT.”

Team Switch will continue to work on developing their product in Startup Studio this spring and hope to spin out the company after graduation. So stay tuned!


Our innovative K-12 Teacher-in-Residence program is bringing CS education to classrooms across the city, Tech & Learning reports.

Since the beginning of the 2016–17 school year, Meg Ray, a Cornell Tech Teacher-in-Residence, has been “providing content coaching, curriculum consultation, and professional development on a weekly basis for teachers in all grades” at PS/IS 217, where principal Mandana Beckman is committed to incorporating CS instruction into every classroom in grades K–8.

“We have just started working with the middle school on CS integration in their science classes,” Ray says. “The goal is to deepen understanding of both subjects by building on prior CS experiences to support synthesis of new science content.” Ray sat down with science teacher Emily Wong in December, and they co-designed a computing project to complement her existing sixth-grade unit on ecosystems. Wong has never taught coding or CS, and Ray, a former classroom teacher herself, appreciates that Wong is “open to collaborating and modeling the learning process for her students.”

This project, a light-up, talking poster about biomes, combines “making with paper circuits, coding in Scratch, and physical computing with Makey Makey.” The students had attended community events held by Cornell Tech introducing making and paper circuits, so Ray knew that “bringing this type of hands-on work into the classroom would build on prior knowledge and be highly motivating.” The students feel at home with the technology and find the curriculum both “rigorous and fun.” As they build knowledge and confidence they’ll move on to more advanced projects, Ray says, such as “creating animations or programs that control robots” using Raspberry Pis and data collected with sensors.

Read the full article on Tech & Learning.


Johnson Cornell Tech Professor Roni Michaely’s research on American capitalism was featured in The Wall Street Journal.

New research by economists Gustavo Grullon of Rice University, Yelena Larkin of York University and Roni Michaely of Cornell University argues that U.S. companies are moving toward a winner-take-all system in which giants get stronger, not weaker, as they grow.

That’s the latest among several recent studies by economists working independently, all arriving at similar findings: A few “superstar firms” have grown to dominate their industries, crowding out competitors and controlling markets to a degree not seen in many decades.

Let’s look beyond such obvious winner-take-all examples as Apple or Alphabet, the parent of Google.

Read the full article in The Wall Street Journal.


Trevor Pinch, the Goldwin Smith Professor of Science and Technology Studies, spent the fall 2016 semester on sabbatical at Cornell Tech in New York City, where he began conducting research with the Connective Media hub, which focuses on social technologies and the role of new media. It was there he began collaborating with Serge Belongie, professor of computer science at Cornell Tech, who had been working on “deep learning” research and fine-grained visual categorization.

Belongie’s research – teaching a computer to visually recognize what species a bird is or what specific type of plant or flower something is, as opposed to just recognizing that something is a plant or flower – had all been situated in the natural world. After talking with Pinch, he began to think about how to apply recognition approaches to man-made objects – in this case, musical instruments.

Pinch had an ongoing collaboration with a group at École Polytechnique Fédérale de Lausanne (EPFL) who had digitized 50 years of audio and video from the Montreux Jazz Festival in Switzerland. The performers and concerts at Montreux were all known, but not the musical instruments they used. Pinch imagined finding a way for computers to identify those instruments based on the audio recordings alone, but as he began collaborating with Belongie, he realized that exploring machine learning possibilities using visual recognition was more workable, and held more immediate promise, for applications of the technology.

Is it easier to teach computers techniques of visual recognition, rather than audio recognition?

Belongie: The machine learning community is putting a lot more effort into visual recognition. One thing that’s well-known about all these deep-learning approaches is that they require gigantic amounts of training data, and the infrastructure is well developed for people to annotate, or tag, images for these large training sets. There are certain characteristics of audio that make it trickier to annotate. For example, in the case of images, you can show an annotator 25 thumbnails on a screen all at once, and the human visual system can process them very quickly and in parallel, and you can use efficient keyboard shortcuts to label things. Imagine playing 25 audio clips at the same time – making sense of that is really hard, and it just raises the burden of annotation effort.

Pinch: So that’s why we’ve been focusing on visual. This point about tagging is very interesting, because I’m new to this whole business of computer recognition of anything. My background is in science and technology studies and sound studies. Originally I had started collaborating with that group in Switzerland; the original idea was to use that Montreux Jazz Festival collection. But then I realized, the issue there is they have lists of who the performers were at every concert, but they haven’t got lists of the instruments. So first, somebody has to basically go through and tag all the instruments in the Montreux collection before you can start to train a computer.

One of the reasons we have been able to make progress on this project is that we found a website where visual images of musical instruments are tagged already, kind of crowdsourced, and people send in videos or still images of musical instruments. And that’s the source of data we’re using to begin with as a training set.

Belongie: We identified an undergraduate who started to poke away at the project a bit, and now there’s a Ph.D. student, too, who is working on it this spring.

Does your work feel like a radical collaboration, crossing and combining the humanities with technology?

Pinch: I think this is indeed a radical collaboration. I had no idea how many advances there had been in the field of computer-based visual recognition. And it has happened pretty rapidly, it seems to me.

Belongie: The media tends to get a bit breathless whenever they’re talking about “deep learning,” as if you just push a button and it solves everything. So perhaps the radical aspect of this is that we aren’t putting deep learning on a pedestal. Instead, we’re incorporating it as a commodity in a research pipeline with many other vital components.

A lot of that excitement in the AI [artificial intelligence] community comes from engineers making gizmos for other engineers, or products that are targeted toward a geeky audience. And I think in this case, even though admittedly it’s a geeky corner of the music world, these are not computer [scientists] we’re targeting. We’re talking about archiving high-quality digital footage, and for it to be a success, it needs to enable research in the humanities. This work isn’t primarily geared toward publishing work in artificial intelligence.

Pinch: I wrote a book [“Analog Days”] about electronic music synthesizer inventor Robert Moog, Ph.D. ’65, and one of the interesting things for me was that Moog, in the foreword to my book, wrote that synthesizers are one of the most sophisticated technologies that we as humans have evolved. And actually, that’s quite an insight – thinking of a musical instrument as a piece of technology enables me to apply all sorts of ideas that I’ve been working on in the field of science and technology studies and the history of technology. Of course, pianos are mass produced, synthesizers are mass produced, and they are kind of machine-like; can this start to broaden the perspective on what is a musical instrument?

Another thing I’ve been talking to Serge about is that one of my instincts was to start this project by getting some well-known musicians and interview them about how they recognize instruments visually, how they would do this task – how does a human do this task? And it’s very interesting because that’s not the approach we’re following. I learned something from that: The computer is learning in a different way than how a human would.

And that’s a very interesting thing for a humanist to discover, how this visual recognition works. And it stretches and widens my own thinking about what musical instruments are and how we should start to think about them; maybe “musical instrument” is the wrong term, and we should start thinking about them as “sound objects” because we’re including headphones, microphones and any piece of music gear.

Belongie: And, on the flip side, computer scientists have a lot to learn about how experts in their respective fields learn. In deep learning, [we’re] still embarrassingly dependent on labeled training data. For example, if you want a machine to recognize a black-capped chickadee, you probably need to show the machine hundreds of examples of that bird under all sorts of viewing conditions. There is a lot of talk in our field about how we should move toward what is known as unsupervised learning. We know humans make extensive use of unsupervised methods, in which we learn about the world simply by grabbing things, knocking them over, breaking things – basically making mistakes and trying to recover from them.
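Belongie’s point about dependence on labeled training data can be sketched with a toy supervised classifier. The feature vectors and labels below are made up purely for illustration; a real system would extract features from images with a deep network:

```python
# Toy illustration: a supervised classifier is only as good as its labels.
# The 2-D "feature vectors" here are invented; a real pipeline would
# derive them from labeled photographs.
import numpy as np

# Labeled training data: species label -> example feature vectors.
train = {
    "black-capped chickadee": np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]),
    "blue jay":               np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]),
}

# "Training" a nearest-centroid classifier: average the labeled examples.
centroids = {label: feats.mean(axis=0) for label, feats in train.items()}

def classify(x):
    """Predict the label whose centroid is closest to x."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([0.88, 0.12])))  # prints "black-capped chickadee"
```

Without the labeled examples there is nothing to average, which is exactly the dependence on annotation that unsupervised approaches try to escape.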

I felt inspired to watch a live concert video recently. If you mute the performance, just turn off the sound and watch, say, a five-minute performance of Queen, it’s surreal. You don’t hear the audience, you don’t hear the incredible music, but if you look at it through the lens of a computer vision researcher, you’re actually getting a whole bunch of views of all the performers and their instruments, close up, far away, lots of different angles … it’s quite an opportunity for computer vision researchers, because you have these people on a stage exhibiting these distinct objects for several minutes at a time from all these different angles and distances. I wouldn’t dare say it should be easy, but it should be way easier than the general problem of object detection and recognition.

Pinch: For me, the computer science perspective is an incredibly radical perspective in that it’s leveling the field and making us think anew about something that, as a humanist, I’m very familiar with from one perspective – from the history of music. I’m in this project basically as an academic. If there are commercial possibilities, great, but I’m in it for the intellectual fulfillment of working on such a project and its research.

I keep remembering something I learned from Bob Moog: If you interact with people who are incredibly technically skilled in an area that you’re unfamiliar with, you can, in an open-minded way, ask smart questions and learn all sorts of interesting things.

Is there anything else you have learned about each other or how each of you views the world?

Pinch: I think we’re learning stuff all the time. One of the things I learned in an early conversation with Serge was how computer scientists like classification, and that initially the world of musical instruments looked like it might be too messy a world, because classifying instruments depends on the manufacturers and particular varieties; it’s not like scientific classification.

There’s very little logic to it other than: this is what people over time have found has worked. Computer scientists generally prefer to work with more rigorously categorized data.

Belongie: Yes – that is largely driven by the requirements for getting work published. Everything needs a crisp, concrete label. I think it could be liberating in this collaboration if the targets for publication could just be completely outside of computer science and have different metrics of success.

Pinch: In the humanities, we have other ways of classifying – I’m very influenced by philosopher Ludwig Wittgenstein, and he had this notion of “family resemblance” in classification … so that’s how I tend to think about classification, that’s another way into it.

How might this research have real-world uses?

Pinch: This ability for a computer to visually identify musical instruments could have applications for education. It could help archivists – instead of labeling these things manually, a computer could do it, tagging instruments.

I really see the technology itself – an app, if we develop an app, or teaching the computer to do this with a piece of software – as the main product from this collaboration.

Belongie: This research is not going to replace musical archivists, but there’s a tremendous amount of power, that – if harnessed correctly – can help those people do their jobs. The ball is in my court to get some kind of preliminary results out, and then the learning will begin.

Through my work with developing the Merlin Bird Photo ID app with the Lab of Ornithology, I have witnessed firsthand the passion of the birding community and how that allows us to build up large, crowdsourced data sets and constantly stress-test the system. A big part of that collaboration was that we had such a large, highly energized fan base that was basically begging us to take their photos and analyze them.

Pinch: A project like this is kind of perfect for application and engagement of wider communities – which is one of the themes of our campuses. You can think of other areas where you could have visual recognition of gear: high-end sailing, mountaineering, other technical gear. This technology could lead to something that can be put into the hands of people and be useful.

This article originally appeared in The Cornell Chronicle.


In a recent Wired article, Professor James Grimmelmann explained what Facebook’s new role as a media company would mean for its technology and its ability to remain neutral. 

Not that Facebook hasn’t exercised editorial judgment before. “Facebook has never been a neutral platform,” says James Grimmelmann, a professor of law who studies social networks at Cornell Tech. “It has always helped some content spread better than others.”

Facebook’s technical and social decisions have had an observable impact on content before, Grimmelmann says. Supporting long, silent GIFs helped make cooking videos a viral genre (a technical construct), for example, and the company’s algorithmic prioritization of clicks helped salacious content rise to the top (a social one). “The fact that Facebook will be an explicit content creator won’t change the fact that it’s still going to pick winners and losers among content creators,” he says.

Read the full article on Wired.


Roy Cohen is a self-described skeptic when it comes to the conversations that pundits are having about the future of artificial intelligence.

But that wariness hasn’t stopped Cohen, Technion-Cornell Dual Master’s Degrees in Connective Media ’18, from diving into the field, first as a technologist and filmmaker, now as a graduate student enrolled in the Connective Media program at Cornell Tech.

His feature-length documentary Machine of Human Dreams, released in 2016, was awarded Best Technology Film of the Year by Russia’s Polytechnic Museum and is currently touring the festival circuit. The film was selected for some of the most prestigious documentary film festivals and was picked up for distribution by the British agency Dogwoof.

For Cohen, making Machine of Human Dreams was a three-year odyssey around the world: from MIT robotics labs in Cambridge, Massachusetts, to investor meetings in Hong Kong. The film is a meditation on sentience, as well as an astonishing character portrait of the eccentric genius Ben Goertzel — a man obsessed with bridging the gap between human and machine intelligence.

A look inside Sophia, an artificially intelligent robot being developed by Ben Goertzel

Despite the cutting-edge nature of Cohen’s subject and research, he insists he is less interested in the potential of emerging technology than the human passion that creates and drives that technology forward.

“I was fascinated by people who have that kind of caliber of intellect — in the way that Ben [Goertzel] does, in that specific way — and choose to use it to build machines that can transcend humans,” Cohen said. “That’s really what motivated me to dedicate three years of my life to this film: curiosity.”

Curiosity is also what led Cohen to Cornell Tech, where he is studying data analytics, machine learning and computer vision.

“When I came to Cornell Tech, I was looking into — let’s call them vernaculars,” said Cohen. “Learning about the machine learning aspect, and how entertainment is changing to adapt to what users like and what users want to watch. Learning about new horizons in storytelling.”

These new horizons, as Cohen calls them, entail a fundamental shift in the way media is digested. Consumers are more and more likely to be targeted by content (rather than actively seeking that content out); data is being used as the basis for creating content (perhaps changing the nature of that creative act); and the boundaries between art and brand (as well as consumer and creator) are increasingly blurred.

“I definitely see a trend in technology companies going into entertainment,” said Cohen. “Technology companies, as opposed to entertainment or production companies, are all about targeted content, and maximizing exposure and visibility of their content.”

True to the interdisciplinary spirit of Cornell Tech, Cohen views his research into emerging technologies and analytic algorithms as tools to help him fulfill his unique creative vision, rather than paradigms which will define it.

A newcomer to coding, Cohen said going to grad school has opened up new paths for him. “It has given me tools and a new way of thinking. It has given me a new language. Eventually, I hope, I will find a way to translate this experience to new ways of storytelling.”


Jia Zheng knows that design is crucial to a product’s success — aesthetic concerns, like the right color or shape of a new tool, are often determining factors in how that tool is adopted.

That’s why Zheng is so enthusiastic about Cornell Tech’s Product Studio, where she and a cohort of Parsons School of Design students were placed on teams with Cornell Tech master’s students.

A former in-house designer for Guardian Life Insurance, Zheng is aware of the difference between assignments that originate in the classroom and those that come from real challenges in the workplace today. She was delighted when her team was tasked with designing better machine learning interfaces for Google’s customer service centers. Sitting in on meetings directly with Google, hashing out Google’s needs and presenting her team’s ideas — this was exactly the kind of experience Zheng sought.

“I know the difference between making up a problem and a real-world problem,” said Zheng. Design has limitations like budget, timeline, advertising and real-world users. Without those limitations, the lines between art and design can blur. “The more real [projects] get, the more valuable it gets.”

Making ideas visual

Zheng’s role, as the designer on the four-person Cornell Tech Product Studio team, was to distill ideas visually — everything from building presentation slides to iterating prototypes in Adobe Experience Design.

“My job is to simplify, and get the most important ideas out because we don’t have a lot of time,” said Zheng. “If you just present data and a graph, people will forget in seconds.”

A customer service agent manager browses the overall performance of the team from the historical performance tab.

Design is at the core of every iconic product, from Coca-Cola’s bottle to the iPhone. This philosophy is put to work at Cornell Tech, where 15 Parsons School of Design students were embedded in Product Studio teams, embracing the role of design in product development from beginning to end.

“A well-working team does not have a business person, a programmer, a lawyer, and a designer; it has four founders working together to design a product,” said Justin Bakse, assistant professor of interaction design with Parsons School of Design.

“Each member’s background and expertise informs the [design] process,” Bakse said. “Business, legal and technical concerns provide constraints that help guide the designer.”

Designing new measurement

Google challenged Zheng’s team to help the company analyze customer service experience. In the past, Zheng explained, Google had surveyed customers about their experience after interacting with customer service, but they only got about 15% of people — or one in seven — to complete the survey.

With so few responses, Google was concerned the survey responses weren’t comprehensive or accurate enough to be used as a basis for assessment and improvement.

Zheng’s team built a solution using advanced natural language processing and sentiment analysis to detect how happy, or unhappy, a customer is while on the phone with customer service. That means satisfaction with customer service can be measured live, in real time — no more surveys at the end of the call.
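The article doesn’t describe the team’s implementation, but the “live, in real time” idea can be sketched as a running score updated after each utterance. The tiny word lexicon below is a made-up stand-in for the team’s actual natural language processing and sentiment analysis:

```python
# Hedged toy sketch of live call-satisfaction scoring.
# The word lists are invented for illustration; a real system would use
# far more sophisticated sentiment models.
POSITIVE = {"great", "thanks", "helpful", "resolved", "perfect"}
NEGATIVE = {"frustrated", "waiting", "broken", "unhappy", "cancel"}

def sentiment(utterance):
    """Score one utterance: +1 per positive word, -1 per negative word."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def live_scores(transcript):
    """Running satisfaction score after each utterance, no end-of-call survey."""
    total, scores = 0, []
    for utterance in transcript:
        total += sentiment(utterance)
        scores.append(total)
    return scores

call = [
    "I have been waiting an hour and I am frustrated",
    "okay that step worked",
    "great that resolved it thanks",
]
print(live_scores(call))  # prints [-2, -2, 1]: the score recovers as the call goes well
```

The point of the sketch is the shape of the solution, a score available after every utterance, rather than one survey response collected (or not) at the end.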

Zheng worked with four Cornell Tech students during a Product Studio last fall.

Beyond enjoying the work with Google, Zheng said she feels the Cornell Tech team respects her role as a designer.

Parsons’ Bakse echoes that sentiment: design is not just a piece of product development, but crucial to the entire experience.

“It is not only important for designers to be part of this process, it is impossible for them not to be,” he said.


What do Airbnb hosts write in their profiles to help potential guests trust them?

Cornell researchers will present a paper on this question at the 20th Association for Computing Machinery Conference on Computer-Supported Cooperative Work and Social Computing, scheduled for Feb. 25 through March 1 in Portland, Oregon. The paper, “Self-disclosure and Perceived Trustworthiness of Airbnb Host Profiles,” has received an honorable mention for best paper at the conference.

Authors Mor Naaman, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech; Xiao Ma, doctoral student in the field of information science, Cornell Tech; Kenneth Lim Mingjie ’15; and former Cornell communications professor Jeff Hancock, now at Stanford University, studied perceptions of trustworthiness in Airbnb host profiles. They used a mixed-methods research approach, combining qualitative analysis, large-scale annotation and an online experiment, to find out what hosts write about, how trustworthy they seem and whether these perceptions lead to the choice of a host.

“We are very interested in trust and how it’s formed online, as it will enable the next generation of peer-sharing and shared economy services,” Naaman said. “Airbnb is a great example with a publicly available dataset that allowed us to start examining this topic in depth.”

When researchers asked people to rate Airbnb profiles for trustworthiness, they found that the longer the profile text, the more trustworthy it was perceived to be. But length isn’t everything: not all topics are created equal. The language of hospitality (e.g., “We look forward to hosting you”) was more effective in establishing perceived trustworthiness than listing a life motto, as Airbnb suggests hosts do.

In addition, signaling theory predicts that hosts show trustworthiness by disclosing more about their origin, residence, work or study, which are more difficult to fake than interests or beliefs.

“Trust is deeply intertwined with safety,” said Ma, lead author. “Guests want to know if they’ll be safe, treated well and the property is well maintained, etc. We found that profiles which signal hospitality end up being more successful. A show of hospitality is an explicit gesture that is directly relevant to the transaction.”

As part of their research, the team produced the first systematic coding scheme and accompanying dataset for analyzing self-disclosure in online profiles.

“It would be great if Airbnb and peer-sharing communities could formalize or commoditize these findings,” Mingjie said. “Trust is often based on physical appearance. When we are faced with a paragraph of text online, it would be wonderful to have some alternative signals to make good decisions.”

The full paper is available from the Social Technologies Lab’s website, where the researchers also made available all the data used for the study.


Vikram Krishnamurthy, Professor in the School of Electrical and Computer Engineering (ECE) at both Cornell Engineering in Ithaca and Cornell Tech in New York City, is taking what he knows about statistical signal processing and applying it to human decision making in social networks. Krishnamurthy comes to Cornell after many successful years at the University of British Columbia (UBC) in Vancouver, Canada.

“Cornell is an exciting place to be,” says Krishnamurthy. “ECE has a very strong research group which works in areas similar to mine, and my affiliation with Cornell Tech means I can be involved in this revolutionary new approach to graduate education.”


Krishnamurthy received his undergraduate degree in electrical engineering from the University of Auckland, New Zealand. He earned his Ph.D. from the Australian National University, Canberra, and then taught at the University of Melbourne for ten years before moving to UBC, where he was a professor and Canada Research Chair in Signal Processing in the Department of Electrical and Computer Engineering.

Statistical signal processing is part of the larger field of electrical engineering. Signals of all sorts are treated as stochastic, meaning they evolve unpredictably over time, appearing random to an observer at the receiver. Researchers who work in statistical signal processing use mathematical and computational tools to help separate the signal from the noise, in effect erasing the randomness. As evidenced by Krishnamurthy’s work, there are useful and important applications for statistical signal processing in a broad range of scientific fields.
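A minimal sketch of what “separating the signal from the noise” means in practice: recovering a slowly varying signal from noisy samples with a moving-average filter. This is a textbook illustration of the general idea, not an example drawn from Krishnamurthy’s research, and the signal, noise level and window size are arbitrary choices.

```python
import numpy as np

# Illustrative only: estimate a clean signal from noisy observations
# with a simple moving-average filter, a basic statistical
# signal-processing technique.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)            # underlying signal
noisy = clean + rng.normal(0, 0.5, t.size)   # stochastic observations

window = 25
kernel = np.ones(window) / window
filtered = np.convolve(noisy, kernel, mode="same")

# Averaging suppresses zero-mean noise, so the filtered estimate
# tracks the clean signal more closely than the raw observations.
err_noisy = np.mean((noisy - clean) ** 2)
err_filtered = np.mean((filtered - clean) ** 2)
print(err_filtered < err_noisy)  # filtering reduces mean-squared error
```

The filter trades a small amount of signal distortion for a large reduction in noise variance; more sophisticated methods (Kalman filters, hidden Markov model estimators) make that trade optimally under a stochastic model of how the signal evolves.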

These days, Krishnamurthy has two main research threads that pique his curiosity and occupy his time. The first, as mentioned above, is applying statistical signal processing as a tool to understand and predict human behavior in social networks. The other uses statistical signal processing and stochastic optimization/control to improve the abilities and performance of biosensing devices.

“About eight or nine years ago I started thinking more about behavioral economics and how I might be able to use what I knew about signal processing to combine it with the ideas underlying behavioral economics,” says Krishnamurthy. Behavioral economics is a way of analyzing human economic decision-making by taking into account psychological insights into human behavior. “It makes sense because at one level, people really are complex social sensors and I was accustomed to working with sensors.” Krishnamurthy began to work with a Vancouver company that used machine learning to help maximize YouTube video views for its clients. He was inspired by what he learned from that experience to ask the bigger question: How can we understand human beings as social sensors who interact with and influence each other?

“The typical 18-year-old spends up to six hours a day on social media,” says Krishnamurthy. “All those interactions have an effect. And we can use mathematical analytical tools to help explain some of the effects.”

Highlighting the broad range of fields where statistical signal processing can be useful, Krishnamurthy has also been collaborating with the creator of a bio-electronic device consisting of an artificial cell membrane and an exquisitely sensitive general-purpose sensor. “This device is able to detect substances in minuscule quantities,” says Krishnamurthy. “There are many possible uses for the device.” Krishnamurthy’s research helps the device quickly and accurately identify the source of signals detected by the sensor. “It could be used to look for the presence of explosives or certain pathogens, or to test stem cells for flaws. It has the potential to be incredibly useful.”

Krishnamurthy is happy to be in New York, delving into questions that excite him. “I am looking forward to graduating outstanding students,” says Krishnamurthy. “That is the biggest and best legacy for any academic researcher. I am also excited to pursue my research into the intersection of behavioral economics and signal processing, as well as my research at the human-sensor interface. This has been an exciting move for me and I am looking forward to what I can accomplish here with students and in my research. Also, at Cornell Tech we are designing a brand-new master’s program from scratch, which has exciting possibilities.”