<!DOCTYPE HTML>
<!--
Strongly Typed by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html lang="en">
<head>
<title>SyntaxFest 2025 | Keynotes</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<link rel="stylesheet" href="assets/css/main.css" />
		<link rel="icon" type="image/x-icon" href="images/favicon.ico" />
<style>
p {
clear: both;
}
p img {
margin-right: 10px; /* Space between the image and the text */
}
.speaker-container {
display: flex;
align-items: flex-start;
gap: 20px; /* Space between image and text */
}
.speaker-container img {
flex-shrink: 0;
width: 180px; /* Adjust based on desired image size */
height: auto; /* Maintain aspect ratio */
border-radius: 8px; /* Optional: rounded corners */
}
.speaker-details {
flex-grow: 1;
max-width: 100%;
}
.speaker-name {
font-weight: bold;
font-size: 18px;
}
.speaker-affiliation {
font-style: italic;
color: #555;
/*margin-bottom: 10px;*/
}
.presentation-date {
/*color: #555;*/
margin-bottom: 20px;
}
.speaker-bio {
margin-top: 15px;
font-size: 16px;
line-height: 1.6;
color: #333;
}
.talk-summary {
margin-top: 10px;
padding: 10px 15px;
background-color: #f9f9f9;
font-size: 18px;
line-height: 1.5;
color: #222;
}
.section-title {
margin-top: 20px;
margin-bottom: 5px;
font-size: 17px;
font-weight: bold;
color: #333;
letter-spacing: 0.5px;
}
/* Responsive adjustments */
@media (max-width: 768px) {
.speaker-container {
flex-direction: column; /* Stack image above text on smaller screens */
align-items: center; /* Center image */
}
.speaker-container img {
width: 60%; /* Make image smaller on mobile */
max-width: 250px; /* Prevent it from getting too big */
}
.speaker-details {
text-align: center; /* Center-align text for better readability */
}
}
</style>
</head>
<body class="no-sidebar is-preload">
<div id="page-wrapper">
<!-- Header -->
<section id="header">
<div class="container">
<!-- Nav -->
<nav id="nav" style="font-size: 22px;">
<ul>
<li><a href="index.html"><span>Home</span></a></li>
<li class="has-dropdown">
<a href="calls.html"><span>Calls</span></a>
<ul class="dropotron">
<li style="white-space: nowrap;"><a href="cfp_tlt.html" style="display: block;">TLT</a></li>
<li style="white-space: nowrap;"><a href="cfp_depling.html" style="display: block;">DepLing</a></li>
<li style="white-space: nowrap;"><a href="cfp_udw.html" style="display: block;">UDW</a></li>
<li style="white-space: nowrap;"><a href="cfp_iwpt.html" style="display: block;">IWPT</a></li>
<li style="white-space: nowrap;"><a href="cfp_quasy.html" style="display: block;">QUASY</a></li>
</ul>
</li>
<li><a href="committees.html"><span>Committees</span></a></li>
<li class="has-dropdown">
<a href="programme.html"><span>Programme</span></a>
<ul class="dropotron">
<li style="white-space: nowrap;"><a href="programme.html" style="display: block;">Schedule</a></li>
<li style="white-space: nowrap;"><a href="keynotes.html" style="display: block;">Keynote Speakers</a></li>
<li style="white-space: nowrap;"><a href="author_instructions.html" style="display: block;">Author Instructions</a></li>
<li style="white-space: nowrap;"><a href="social_events.html" style="display: block;">Social Events</a></li>
<li style="white-space: nowrap;"><a href="photo_highlights.html" style="display: block;">Photo Highlights</a></li>
</ul>
</li>
<li><a href="registration.html"><span>Registration</span></a></li>
<li class="has-dropdown">
<a href="venue.html"><span>Location</span></a>
<ul class="dropotron">
<li style="white-space: nowrap;"><a href="venue.html" style="display: block;">Venue</a></li>
<li style="white-space: nowrap;"><a href="lunch.html" style="display: block;">Lunch</a></li>
<li style="white-space: nowrap;"><a href="travel.html" style="display: block;">Travel</a></li>
<li style="white-space: nowrap;"><a href="accommodation.html" style="display: block;">Accommodation</a></li>
<li style="white-space: nowrap;"><a href="Slovenia.html" style="display: block;">Slovenia</a></li>
<li style="white-space: nowrap;"><a href="recommendations.html" style="display: block;">Recommendations</a></li>
<li style="white-space: nowrap;"><a href="visa.html" style="display: block;">Visa</a></li>
</ul>
</li>
<li><a href="https://syntaxfest.github.io/" target="_blank"><span>Past events</span></a></li>
</ul>
</nav>
</div>
</section>
<!-- Main -->
<section id="main">
<div class="container">
<div id="content">
<!-- Post -->
<article class="box post">
<header>
<h2>Keynote Speakers</h2>
</header>
						<h3 style="margin-top: 20pt;" id="IWPT">International Conference on Parsing Technologies (IWPT)</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/Papadimitriou.jpg" alt="Image of Isabel Papadimitriou"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://www.isabelpapad.com/" target="_blank">Isabel Papadimitriou</a></div>
<div class="speaker-affiliation">Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University</div>
								<div class="presentation-date">Tuesday, 26 August 2025 <br> <a href="https://www.youtube.com/watch?v=nB7BeOuHi_I" target="_blank">[Video link]</a> </div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>What can we learn from language models?</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
This talk will examine how linguistic theory can benefit from the recent surprising successes of language models in modeling human language production. Language models provide linguists with an unprecedented empirical tool to expand and test our theoretical hypotheses about language. I will go over two main methodologies for taking advantage of language models as an empirical tool. Firstly, examining language model internals as functional theories for how linguistic information can be represented in ways that lead to linguistic capabilities. Secondly, using model training as an empirical testbed, examining what kinds of environments make statistical language learning possible or harder. Both methodologies showcase the importance of developing empirical paradigms that narrow the gap between computational methods and linguistic concerns in order to make language models able to help us expand theoretical horizons.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
									Isabel Papadimitriou is a Kempner Fellow at the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard, and an incoming assistant professor of linguistics at the University of British Columbia. She is interested in analyzing how large language models learn and represent abstract structural systems, and in how experiments on language models can help enrich the hypothesis space around what makes the learning and representation of language possible.
</div>
</div>
</div>
<h3 style="margin-top: 40pt;" id="UDW">Universal Dependencies Workshop (UDW)</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/deLhoneux.jpg" alt="Image of Miryam de Lhoneux"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://people.cs.kuleuven.be/~miryam.delhoneux/" target="_blank">Miryam de Lhoneux</a></div>
<div class="speaker-affiliation">KU Leuven, Belgium</div>
								<div class="presentation-date">Wednesday, 27 August 2025 <br> <a href="https://www.youtube.com/watch?v=sv8XnAYMQvQ&t=1s" target="_blank">[Video link]</a></div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>Typologically informed NLP evaluation</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
NLP has a long history of focusing mainly on English. While increasing efforts are being made towards making language technology more multilingual, English remains the language on which NLP technology is developed first, and applied to other languages next, which inevitably leads to degraded performance compared to English. This talk is about reversing this trend and putting multilinguality at the core of NLP, rather than at the periphery. I describe how typology can inform NLP evaluation, using our recently proposed language sampling framework. A strong limitation of the approach is the state of multilingual datasets, which tend to lack coverage, be machine-translated or have questionable quality. UD is an exception, and I emphasize the role it can play in establishing best practices in multilingual NLP evaluation.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
Miryam de Lhoneux is an assistant professor in the department of Computer Science at KU Leuven in Belgium, researching and teaching Natural Language Processing. She heads the LAGoM NLP lab where the focus is on multilingual and interpretable models. Previously, she was a postdoc at the University of Copenhagen in Denmark. She has a PhD from Uppsala University, Sweden, an MSc in Artificial Intelligence from the University of Edinburgh, UK, and a BA and MA in languages and literatures from UCLouvain, Belgium.
</div>
</div>
</div>
<h3 style="margin-top: 40pt;" id="DepLing">International Conference on Dependency Linguistics (DepLing)</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/Zeman2.jpg" alt="Image of Daniel Zeman"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://ufal.mff.cuni.cz/daniel-zeman" target="_blank">Daniel Zeman</a></div>
<div class="speaker-affiliation">Charles University, Prague</div>
								<div class="presentation-date">Thursday, 28 August 2025 <br> <a href="https://www.youtube.com/watch?v=NPscoUxMBx0" target="_blank">[Video link]</a></div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>Auxiliaries across Languages and Frameworks</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
In my talk, I will discuss the status of auxiliaries (i.e., auxiliary verbs as well as uninflected non-verbal particles with auxiliary function) in dependency treebanks. The focus will be on two frameworks, Universal Dependencies (UD) and the Prague family of treebanks, rooted in the Functional Generative Description. However, I will occasionally show examples from other treebanks and frameworks, encountered during the HamleDT harmonization effort. <br><br>
Besides looking at various treatments of auxiliaries in different annotation schemes, I will also discuss the question of delimiting the set of auxiliaries in individual languages (or, more exactly, the set of words that receive the special treatment in the respective annotation schemes). Various grammatical tests may be available, but sometimes the auxiliaries are simply enumerated by traditional school grammar. Moreover, there is a scale of categories ranging from pure grammatical auxiliaries through modals and phase verbs to various semantically bleached verbs that take other verbs as complements, yet their contribution is lexical rather than grammatical and their syntactic behavior shows no anomalies. All these aspects complicate finding a unified definition that would be applicable in a multi-lingual dataset, such as HamleDT or UD.<br><br>
In the last part of the talk, I will show some examples of contrastive cross-linguistic studies that would benefit from comparably defined auxiliaries. I will show how we encourage comparability in UD using a common database of auxiliaries, and I will argue that it has the potential to become a useful resource of its own.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
									Daniel Zeman is an associate professor of computational linguistics at Charles University in Prague. He obtained his PhD (also from Charles University) in 2005 with a dissertation on statistical methods for syntactic parsing of Czech. He then worked on cross-lingual transfer techniques for low-resource languages, and led several projects focused on multilingual NLP and harmonization of linguistic resources, including Interset (for morphological tagsets) and HamleDT (for dependency treebanks). He is one of the founders and leading personalities of the Universal Dependencies initiative, and vice-chair of the COST Action “Universality, Diversity and Idiosyncrasy in Language Technology” (UniDive). His current work extends to harmonized datasets for coreference resolution (CorefUD) and Uniform Meaning Representation (UMR).
</div>
</div>
</div>
<h3 style="margin-top: 40pt;" id="TLT">Workshop on Treebanks and Linguistic Theories (TLT)</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/Zeldes.png" alt="Image of Amir Zeldes"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://gucorpling.org/amir/" target="_blank">Amir Zeldes</a></div>
<div class="speaker-affiliation">Georgetown University</div>
								<div class="presentation-date">Thursday, 28 August 2025 <br> <a href="https://www.youtube.com/watch?v=ur8PHtyM8f0" target="_blank">[Video link]</a></div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>Subject prominence revisited: What makes entities salient?</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
In this talk, I’ll explore what makes certain entities stand out in discourse — what we might call more or less “salient” — and how speakers systematically identify them. Building on existing approaches to information structural “aboutness”, subjecthood, Centering Theory and animacy hierarchies, I argue that salience goes beyond surface categories such as definiteness, pronominalization and grammatical function. It’s also shaped by deeper structures: distributional cues, discourse relations, hierarchical organization, genre conventions, and the communicative goals we infer from context. To get at this, I use a graded notion of salience based on how often entities are included in multiple human-written summaries of a text or conversation. Drawing on manually treebanked data from 24 different spoken and written genres in English, I ask: how is salience expressed for each entity mentioned in a discourse? I’ll show that while traditional linguistic markers of salience all correlate with our salience scores to some extent, every rule has exceptions, and no single feature tells the whole story. Instead, salience cuts across all levels of linguistic structure, and the most informative theoretical model of the phenomenon must therefore combine cues from across morphosyntax, discourse structure, and functional pragmatics.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
Amir Zeldes is Associate Professor of Computational Linguistics at Georgetown University, where he runs the Georgetown University Corpus Linguistics lab, <a href="https://gucorpling.org/corpling/" target="_blank">Corpling@GU</a>. He has worked on multilayer treebank construction and evaluation, including development of the Georgetown University Multilayer corpus (<a href="https://gucorpling.org/gum/" target="_blank">GUM</a>) and datasets for low resource languages, such as the UD Coptic Treebank. His main area of research is computational discourse modeling, working on frameworks such as Enhanced Rhetorical Structure Theory (<a href="https://gucorpling.org/erst/" target="_blank">eRST</a>) and Graded Salience, as well as topics such as coreference resolution, genre variation and summarization. He is currently president of the ACL Special Interest Group on Annotation (<a href="https://sigann.github.io/" target="_blank">SIGANN</a>).
</div>
</div>
</div>
<h3 style="margin-top: 40pt;" id="QUASY">Workshop on Quantitative Syntax (QUASY)</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/Lu.jpeg" alt="Image of Xiaofei Lu"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://sites.psu.edu/xxl13/" target="_blank">Xiaofei Lu</a></div>
<div class="speaker-affiliation">The Pennsylvania State University</div>
								<div class="presentation-date">Friday, 29 August 2025 <br> <a href="https://www.youtube.com/watch?v=V7Z9QPYM-BM" target="_blank">[Video link]</a></div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>The rhetorical and pragmatic functions of syntactically complex structures in academic and second language writing</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
									Previous studies of linguistic complexity in academic and second language (L2) writing have often focused on quantitative differences across different writer groups and/or longitudinal changes over time, without systematic attention to the rhetorical or pragmatic functions that complex forms are used to convey. This talk argues for the importance of and delineates the scope of the function dimension of linguistic complexity analysis in L2 writing research, reviews the methods and findings of emerging efforts on this dimension, and discusses how future L2 writing research could attend to this dimension.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
Xiaofei Lu is the George C. and Jane G. Greer Professor of Applied Linguistics and Asian Studies at The Pennsylvania State University. His research has long centered on computational and quantitative analyses of linguistic complexity in reading materials, second language production, and academic writing. His current work explores mappings between linguistic forms and rhetorical/pragmatic functions in language production and sense-aware measurements of linguistic complexity that account for the specific meanings of polysemous linguistic forms in context. He has published over 90 peer-reviewed articles in leading journals, including <em>Applied Linguistics, Behavior Research Methods, Computer Assisted Language Learning, Language Learning, Studies in Second Language Acquisition, TESOL Quarterly</em>, and <em>The Modern Language Journal</em>. He received the 2023 Ken Hyland Best Paper Award from <em>the Journal of English for Academic Purposes</em>. His latest book, <em>Corpus Linguistics and Second Language Acquisition: Perspective, Issues, and Findings</em>, was published by Routledge in 2023.
</div>
</div>
</div>
<h3 style="margin-top: 40pt;" id="Local">Local Keynote – Cancelled</h3>
<div class="speaker-container">
<img style="width: 233px;" src="images/keynotes/Stepanov.jpg" alt="Image of Artur Stepanov"/>
<div class="speaker-details">
<div class="speaker-name"><a href="https://ung.si/en/directory/30222/arthur-stepanov/" target="_blank">Artur Stepanov</a></div>
<div class="speaker-affiliation">University of Nova Gorica</div>
<div class="presentation-date"><strong style="color: red;">CANCELLED</strong></div>
<div class="section-title">Title:</div>
<div class="talk-summary">
<strong>What we learn about syntax when dependencies fail: Experimental insights into syntactic locality constraints</strong>
</div>
<div class="section-title">Abstract:</div>
<div class="talk-summary">
This talk examines a class of syntactic dependencies that cannot be formed: classic island violations (extraction from adjuncts, complex NPs, wh-islands etc.). I survey psycho- and neurolinguistic evidence quantifying the cognitive cost of breaching locality constraints, showing how these findings expose limits on dependency formation that remain invisible in standard treebanks yet are central to real-time sentence processing. I consider implications for parsing, dependency representations, and cross-linguistic variation, with suggestions for incorporating experimental diagnostics into syntactic annotation and parser-evaluation frameworks.
</div>
<div class="section-title">Bio:</div>
<div class="talk-summary">
Artur Stepanov is a professor of psycholinguistics at the University of Nova Gorica. His work focuses on the cognitive representation and real-time processing of syntactic dependencies in monolingual and multilingual speakers, exploring how internal grammatical competence maps onto observable linguistic behavior. He combines psycholinguistic experimentation with insights from generative syntax, with particular emphasis on lesser-studied Slavic languages. He is involved in multiple international collaborations on projects related to sentence comprehension and production, the linguistic and cognitive dimensions of multilingualism, and, more recently, the compositional aspects of animal (marine mammal) vocalization sequences.
</div>
</div>
</div>
</article>
</div>
</div>
</section>
<!-- Footer -->
<section id="footer">
<div id="copyright" class="container">
<ul class="links">
<li><a href="https://x.com/syntaxfest2025" target="_blank"><i class="fab fa-twitter-square"></i> SyntaxFest2025</a></li>
<li>Contact: syntaxfest2025@ff.uni-lj.si</li>
<li>Design based on Strongly Typed from <a href="https://html5up.net" target="_blank">HTML5 UP</a></li>
</ul>
</div>
</section>
</div>
<!-- Scripts -->
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/jquery.dropotron.min.js"></script>
<script src="assets/js/browser.min.js"></script>
<script src="assets/js/breakpoints.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
</body>
</html>