Updates
Seeking Applicants for New NJIT/NYU Postdoc Position
We’re thrilled to invite applications for a postdoctoral associate in human-centered machine listening. The position is funded by a National Science Foundati...
NSF Grant on Accessible Captioning of Non-Speech Information
Mark Cartwright, Magdalena Fuentes, and Sooyeon Lee have received a new $800,000 NSF grant on accessible captioning of non-speech information. The grant, titled Coll...
Sensorium X Premiered
In collaboration with Luke DuBois at the NYU Ability Project and Max Morrison and Cameron Churchwell, NJIT/SInC’s Mark Cartwright, Danzel Serrano, and Michae...
New article in Frontiers in Computer Science
We have a new article on the current state of NSI captioning research, professional practice, and user preferences in Frontiers in Computer Science (Human-Me...
New article in Trends in Ecology and Evolution
Mark Cartwright contributed to a new article on individual identification in acoustic recordings led by Elly Knight published in the journal Trends in Ecolog...
Panel Moderator at AES Symposium on AI and the Musician
Mark Cartwright will be moderating a panel on ‘How will generative AI reshape music?’ at the AES Symposium on AI and the Musician.
MARL Seed Award
Along with Magdalena Fuentes and Sooyeon Lee, we received an $8,000 MARL Seed Award to fund preliminary work in accessible audio captioning.
SInC Hosting NEMISIG 2022
We are hosting NEMISIG 2022 at NJIT on June 4, 2022. Learn more here.
Tutorial at ISMIR 2021
Mark Cartwright is presenting a tutorial at ISMIR 2021 with Rachel Bittner and Ethan Manilow called Programming MIR Baselines from Scratch: Three Case Studies.
Paper Award at WASPAA 2021
Big congratulations to Yu Wang on receiving a WASPAA Special Best Paper Award for our paper co-written with Juan Pablo Bello and our collaborators at Adobe R...
Two Papers at WASPAA 2021
We have two new papers at WASPAA 2021 this year:
Two Papers at ICASSP 2021
We have two new papers at ICASSP 2021 this year: “Few-Shot Continual Learning for Audio Classification,” led by Yu Wang. This paper investigates how to exp...
SInC Received an NJIT Faculty Seed Grant
SInC received a $7,500 NJIT Faculty Seed Grant on Open World Sound Event Recognition in Longitudinal Audio Data. We will use this money to fund preliminary r...
Hiring PhD students for Fall 2021
I am hiring PhD students to work with me at NJIT starting in Fall 2021.
New NSF grant and postdoc opening
We recently received a new NSF grant on spatial sound scene description, and we are now hiring a postdoc at the NYU Music and Audio Research Lab.
SONYC-UST-V2 released
SONYC-UST-V2, the full dataset (including evaluation data) for DCASE Task 5, has been released on Zenodo.
DCASE Task 5 Results
DCASE Task 5 results have been posted. Thanks to all the teams that participated in the challenge!
Featured in NYTimes Article
We tracked the change in New York City noise during lockdown, and the work was featured in the New York Times’ Upshot.
Joining NJIT
I’m excited to announce that I will be joining New Jersey Institute of Technology as an Assistant Professor in the Department of Informatics in January 2021....
New NPR Story
I was interviewed again on NPR’s Here and Now! This time I was part of a larger piece regarding the uptick in online citizen science volunteering during the ...
Announcing DCASE Challenge 2020 Task 5
I’m excited to announce Task 5 of the DCASE 2020 Challenge: Urban Sound Tagging with Spatiotemporal Context. This task aims to investigate how spatiotempora...