Cognitive Computing in biological and artificial systems @ TU Berlin
Our group’s goal is to deepen the connection between artificial intelligence, data science, and neuroscience in research and teaching. We develop computational models of human brain responses acquired under ecologically valid conditions. By exploring language representation, communication, and cognition in biological and artificial systems, we aim to enhance our understanding of the neural and computational bases of these brain processes. We further aim to understand and optimize artificial neural language models by integrating insights from brain research, ultimately expanding both the explainability of artificial systems and our understanding of the human brain.
Research
The Denizens are interested in understanding how complex information is encoded in the brain and in artificial neural networks. We use machine-learning approaches to fit computational models to large-scale brain data acquired during natural tasks (e.g., reading a book, listening to a story, having a real-world conversation, or watching a movie) and study the correspondence between artificial neural network representations and brain representations. Currently, we explore how more than one language can coexist in the human brain. A minimal sketch of this modeling approach is given below.
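For illustration, the sketch below shows the core of a voxelwise encoding model: a regularized linear regression from stimulus features (for example, embeddings from an artificial language model) to the fMRI response of every cortical voxel. All shapes, names, and data here are hypothetical placeholders rather than our actual pipeline, which, among other things, also models the hemodynamic response with temporal delays.

```python
# A minimal, hypothetical voxelwise encoding model: regularized linear
# regression from stimulus features (e.g., language-model embeddings) to each
# voxel's fMRI response. All shapes and data are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_voxels = 1000, 200, 300, 500
X_train = rng.standard_normal((n_train, n_features))  # stimulus features
Y_train = rng.standard_normal((n_train, n_voxels))    # BOLD responses
X_test = rng.standard_normal((n_test, n_features))
Y_test = rng.standard_normal((n_test, n_voxels))

# Fit one ridge regression per voxel; RidgeCV handles all voxels jointly and
# selects the regularization strength by cross-validation.
model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X_train, Y_train)

# Evaluate per voxel: correlation between predicted and held-out responses.
Y_pred = model.predict(X_test)

def column_corr(a, b):
    """Pearson correlation of corresponding columns of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

scores = column_corr(Y_pred, Y_test)
print(f"median voxel r = {np.median(scores):.3f}")
```

Voxels whose held-out prediction performance is significant are then analyzed further, for example by comparing performance across different feature spaces, as in the publications below.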
Publications
The Practice of Reproducible Research: Case Studies in Data Science
Kitzes, J., Turek, D., Deniz, F., editors. The Practice of Reproducible Research: Case Studies in Data Science. UC Press, 2017.
An online version of the book is available at https://www.practicereproducibleresearch.org/.
Introducing the Case Studies
Turek, D., Deniz, F. Introducing the Case Studies. In: The Practice of Reproducible Research: Case Studies in Data Science, Kitzes, J., Turek, D., Deniz, F., editors. UC Press, 2017.
An online version of the book is available at https://www.practicereproducibleresearch.org/.
The cortical representation of language timescales is shared between reading and listening
Chen et al., Communications Biology, 2024.
Fig. 1: Timescale selectivity across the cortical surface.
Voxelwise modeling was used to determine the timescale selectivity of each voxel, for reading and listening separately (see Methods for details).
a Timescale selectivity during listening (x axis) vs. reading (y axis) for one representative participant (S1). Each point represents one voxel that was significantly predicted in both modalities. Points are colored according to the mean of the timescale selectivity during reading and listening: blue denotes selectivity for short timescales, green denotes selectivity for intermediate timescales, and red denotes selectivity for long timescales. Timescale selectivity is significantly positively correlated between the two modalities (r = 0.41, P < 0.001). Timescale selectivity was also significantly positively correlated in the other eight participants (S2: r = 0.58, S3: r = 0.44, S4: r = 0.34, S5: r = 0.47, S6: r = 0.35, S7: r = 0.40, S8: r = 0.49, S9: r = 0.52; P < 0.001 for each participant).
b Timescale selectivity during reading and listening on the flattened cortical surface of S1, shown according to the color scale at the bottom (the same color scale as in (a)). Voxels that were not significantly predicted are shown in gray (one-sided permutation test, P < 0.05, FDR-corrected; LH left hemisphere, RH right hemisphere, NS not significant, PFC prefrontal cortex, MPC medial parietal cortex, EVC early visual cortex, AC auditory cortex). For both modalities, temporal cortex contains a spatial gradient from intermediate- to long-timescale selectivity along the superior-to-inferior axis, prefrontal cortex (PFC) contains a spatial gradient from intermediate- to long-timescale selectivity along the posterior-to-anterior axis, and precuneus is predominantly selective for long timescales.
c Timescale selectivity in eight other participants; the format is the same as in (b).
d Prediction performance for linguistic features (i.e., timescale-specific feature spaces) vs. low-level sensory features (i.e., spectrotemporal and motion-energy feature spaces) for S1. Orange voxels were well predicted by low-level sensory features, blue voxels were well predicted by linguistic features, and white voxels were well predicted by both sets of features. Low-level sensory features predict well in early visual cortex (EVC) during reading and in early auditory cortex (AC) during listening. Linguistic features predict well in similar areas for reading and listening. After early sensory processing, cortical timescale representations are consistent between reading and listening across temporal, parietal, and prefrontal cortices.
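As a rough illustration of the computation behind such a figure, the sketch below summarizes per-voxel timescale selectivity as a performance-weighted average of (log) timescales and tests the reading-vs-listening correlation with a one-sided permutation test. The estimator, the shuffling scheme, and all numbers are assumptions for illustration; the published analysis differs in detail (for example, permutation tests on fMRI data must respect temporal autocorrelation).

```python
# Illustrative sketch only: the selectivity estimator, the shuffling scheme,
# and all data are assumptions, not the published procedure.
import numpy as np

rng = np.random.default_rng(0)
timescales = np.array([2, 4, 8, 16, 32, 64])  # context lengths in words (hypothetical)
n_voxels = 1000

def timescale_selectivity(scores):
    """Performance-weighted average of log-timescales, one value per voxel.

    scores[i, v] is the held-out prediction score of the timescale-i model
    for voxel v; negative scores are clipped to zero before weighting.
    """
    w = np.clip(scores, 0.0, None)
    w = w / w.sum(axis=0, keepdims=True)
    return np.exp((w * np.log(timescales)[:, None]).sum(axis=0))

# Placeholder scores for two modalities (in practice: voxelwise model fits).
sel_reading = timescale_selectivity(rng.random((len(timescales), n_voxels)))
sel_listening = timescale_selectivity(rng.random((len(timescales), n_voxels)))

# One-sided permutation test: how often does a random re-pairing of voxels
# yield a correlation at least as large as the observed one?
r_obs = np.corrcoef(sel_reading, sel_listening)[0, 1]
r_null = np.array([
    np.corrcoef(rng.permutation(sel_reading), sel_listening)[0, 1]
    for _ in range(2000)
])
p_value = (r_null >= r_obs).mean()
print(f"r = {r_obs:.2f}, one-sided P = {p_value:.4f}")
```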
On Managing Large Collections of Scientific Workflows
Elfaramawy, N., Deniz, F., Grunske, L., Hilbrich, M., Kehrer, T., Lamprecht, A.-L., Mendling, J., Weidlich, M. On Managing Large Collections of Scientific Workflows. Modellierung 2024 Satellite Events (RDiMOD), Gesellschaft für Informatik e.V., Potsdam, March 12–15, 2024.
Fig. 1: The PopIns workflow [KMH16] for variant calling, taken from [El22]. Here, the circles denote programs that are executed on specific types of genomic data.
Gong et al., Nature Communications, 2023.
Fig. 3: Variance partitions for phonemic processing.
To identify phonemic representations across the cerebral cortex, a joint phonemic voxelwise model (VM) consisting of single-phoneme-, diphone-, and triphone-based feature spaces was constructed.
a The joint phonemic VM produces accurate predictions of brain activity in LTC, LPC, MPC, IPFC, and SPFC. To determine whether these representations were best modeled using single-phoneme-, diphone-, or triphone-based features, or by combinations of these features, variance partitioning was used to identify how much of the variance in brain activity could be explained by models based on each of the three phoneme-related feature spaces and their joint pairs.
b Single-phoneme features best explain response variance along the STS. Diphone features best explain response variance in LTC, LPC, MPC, IPFC, and SPFC. Triphone features and the joint of each pair of these phonemic features produce poor predictions in most voxels.
The data used to generate this figure are provided in the source data. Cortical regions referred to are: SPFC superior prefrontal cortex, IPFC inferior prefrontal cortex, MPC medial parietal cortex, LPC lateral parietal cortex, STS superior temporal sulcus, LTC lateral temporal cortex, MTC medial temporal cortex, VC visual cortex, sPMv ventral speech premotor area.
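For readers unfamiliar with variance partitioning, the sketch below shows the two-feature-space case using ridge encoding models: fit each feature space alone and jointly, then decompose held-out explained variance into unique and shared components. Extending this set-theoretic logic to the three phoneme-related spaces above yields seven partitions. The feature contents, shapes, and regularization here are illustrative assumptions.

```python
# Two-feature-space variance partitioning with ridge encoding models.
# Feature contents, shapes, and regularization are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, n_voxels = 800, 200
train, test = slice(0, 600), slice(600, None)
A = rng.standard_normal((n_trs, 50))        # e.g., single-phoneme features
B = rng.standard_normal((n_trs, 80))        # e.g., diphone features
Y = rng.standard_normal((n_trs, n_voxels))  # voxel responses

def heldout_r2(X, Y):
    """Per-voxel held-out R^2 of a ridge model fit on X."""
    model = Ridge(alpha=10.0).fit(X[train], Y[train])
    resid = Y[test] - model.predict(X[test])
    return 1.0 - resid.var(axis=0) / Y[test].var(axis=0)

r2_a = heldout_r2(A, Y)
r2_b = heldout_r2(B, Y)
r2_joint = heldout_r2(np.hstack([A, B]), Y)

unique_a = r2_joint - r2_b          # variance explained by A alone
unique_b = r2_joint - r2_a          # variance explained by B alone
shared_ab = r2_a + r2_b - r2_joint  # variance shared between A and B
```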
pyMooney: Generating a Database of Two-Tone, Mooney Images
Deniz, F. pyMooney: Generating a Database of Two-Tone, Mooney Images. In: The Practice of Reproducible Research: Case Studies in Data Science, Kitzes, J., Turek, D., Deniz, F., editors. UC Press, 2017.
An online version of the book is available at https://www.practicereproducibleresearch.org/.
Team Denizens
Meet our amazing team
Prof. Dr. Fatma Deniz
Principal Investigator
Mathis Lamarre
PhD Student
Anuja Negi
PhD Student
Lea Musiolek
Research Assistant
Lily Xue Gong
Postdoctoral Researcher
Subba Reddy Oota
Postdoctoral Researcher
Prof. Dr. Fatma Deniz
Principal Investigator
Room: MAR 5.039
Phone: +49 (0)30/314-70459
Email: deniz@tu-berlin.de
Office hours: by appointment
Fatma Deniz is the Principal Investigator. Fatma routinely disregards disciplinary boundaries and follows her curiosity, which places her work at the intersection of data science, neuroscience, and artificial intelligence. She designs and promotes approaches for scientific reproducibility and co-edited a book on the topic, The Practice of Reproducible Research: Case Studies in Data Science (UC Press, 2017). An online version of the book is available at https://www.practicereproducibleresearch.org/.
She has also worked on improving online user authentication using image-based authentication procedures combined with knowledge gained from cognitive neuroscience (see MooneyAuth) and created a database of two-tone Mooney images. Email Fatma if you would like to use these images for your research. Code is available on GitHub.
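For the curious, the classic recipe for two-tone images of this kind is to smooth a grayscale photograph and then threshold it. The sketch below implements that recipe as a hedged approximation; it is not pyMooney's exact pipeline, for which see the GitHub repository.

```python
# A hedged sketch of the classic two-tone (Mooney) recipe: smooth a grayscale
# image, then threshold at the median. This illustrates the general technique,
# not pyMooney's exact pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def mooneyize(gray: np.ndarray, sigma: float = 4.0) -> np.ndarray:
    """Return a binary (0/1) two-tone version of a grayscale image."""
    smoothed = gaussian_filter(gray.astype(float), sigma=sigma)
    return (smoothed > np.median(smoothed)).astype(np.uint8)

# Example with a random placeholder "photograph".
rng = np.random.default_rng(0)
two_tone = mooneyize(rng.random((256, 256)))
```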
Fatma is a passionate coder and baker, and she loves feeling the resonance when playing the cello, which makes her simply happy.
Curriculum Vitae
Education
2008-2013
Ph.D. (Dr. rer. nat.) in Computer Science; Dissertation: Visual Consciousness and Corticocortical Connectivity in the Human Brain
Technische Universität Berlin, Bernstein Center for Computational Neuroscience (BCCN) Berlin – Berlin, Germany
2007-2008
Diploma Thesis in Computational Neural Systems
California Institute of Technology (Caltech) – Pasadena, CA, USA
2001-2008
Diploma in Computer Science
Technische Universität München (TUM) – München, Germany
1997-2001
High school
Bursa Kiz Lisesi – Bursa, Türkiye
Work Experience
04/2023-present
Full Professor (W3) of Computer Science, supported by the Berlin Equal Opportunities Program
Chair of Language and Communication in Biological and Artificial Systems, Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin – Berlin, Germany
04/2023-present
Member
Bernstein Center for Computational Neuroscience (BCCN) Berlin
2020-2023
Project Leader
Research project for NSF-BMBF CRCNS grant, “Language representations in bilinguals”
2018-2023
Assistant Project Scientist & Exceptional Principal Investigator
Helen Wills Neuroscience Institute, University of California – Berkeley, CA, USA
2014-2018
Data Science Fellow
Berkeley Institute for Data Science (BIDS), University of California – Berkeley, CA, USA
2016-2017
Lecturer, Undergraduate Division, Data Science Education Program
University of California – Berkeley, CA, USA
2013-2018
Postdoctoral Scholar
Helen Wills Neuroscience Institute, University of California – Berkeley, CA, USA; International Computer Science Institute – Berkeley, CA, USA