---
title: The D.A.R.E. Workshop
subtitle: |
as a part of the [AAAI ICWSM 2024 conference](https://www.icwsm.org/2024/) at Buffalo, USA
description: |
Disrupt, Ally, Resist, Embrace (DARE):<br> Action Items for Computational Social Scientists in a Changing World
section-divs: false
toc: false
title-block-banner: "#008794"
title-block-banner-color: "white"
---
:::{.white .center .big}
A gathering to discuss the emerging dilemmas around the principles and practice of computational social science research in a changing technological landscape.
:::
:::{.center}
**New participation format "Behind the Scenes" announced**
:::
## Motivation
:::{.main .normal}
Many of the contemporary issues affecting computational social scientists concern both the *processes of* and the *ethical principles underlying* computational social science research, which are often, and repeatedly, disrupted by platform politics, new technologies, their implications, and their unknowns. For instance, the increasingly turbulent techno-political online environment has seen several key developments that have affected the scope and character of computational social science research centered on social media. The global pandemic, a looming climate crisis, violent populist events such as the Jan 6, 2021 attack on the U.S. Capitol and its Jan 8, 2023 copycat at Brazil's Praça dos Três Poderes, the nearly two-year-long war in Ukraine, the attack on Israel by Hamas and the subsequent armed conflict in Gaza, and ubiquitous conspiracy theories have spurred more discussion around access, inclusivity, privilege, and propaganda than ever before.
Furthermore, these real-world events have been accompanied by (and often closely entangled with) technological changes in the online world: the rise of TikTok and the fall of Facebook, Twitter's takeover by Elon Musk, and new AI technologies (OpenAI's ChatGPT and DALL-E, Stable Diffusion, GitHub Copilot, etc.). Large language models (LLMs) and their applications, in particular, are widely discussed in academia and the media as potential "disruptors" of scientific integrity and of the jobs of knowledge workers.
As computational social scientists, it is important not simply to study these events, but also to talk about them in light of our roles as the creators and stewards of knowledge. How, then, should the ICWSM community members react to these disruptions? Which disruptions should they embrace and which ones should they resist? Whom do they ally with, and for what purpose? These are not philosophical questions any longer. They are real and they need to be addressed.
:::
## Participate
:::{.white .normal}
In order to participate in the workshop, we invite **three** types of submissions:
1. Short, 200-word statements of interest that express a desire to participate in the workshop discussion, by positioning oneself with respect to the issues discussed below.
2. 2-5 page (in AAAI format) extended abstracts that detail one's position on one of the issues discussed below. Accepted abstracts will appear in the workshop proceedings.
:::{.highlight}
3. "Behind the Scenes" insights on a recent paper (2-pages in AAAI format)
:::
All types of submissions can be made at the submission portal on EasyChair. Given the limited space, preference will be given to those who submit extended abstracts.
The position papers should, at a high level, address concerns with the processes of and the principles underlying computational social science research, and how these are often, and repeatedly, disrupted by platform politics, new technologies, their implications, and their unknowns. By **problems of process**, we refer, for example, to the fact that the increasing availability of proprietary AI tools has created challenges for the research process. By **issues of principle**, we refer to the fact that recent events related to questionable technology takeovers and layoffs, exposés of techno-political alliances, and questionable labor practices at large technology companies create new dilemmas for researchers collecting, annotating, and analyzing online data.
Position papers should be grounded in evidence, prior published work, and, ideally, also personal experience. Examples of the position papers we seek can be found [here](https://www.nature.com/articles/d41586-023-00288-7), [here](https://www.nature.com/articles/d41586-022-03791-5), [here](https://www.nature.com/articles/d41586-022-03294-3), or they can be responses to news stories like [this one](https://www.wired.com/story/twitters-api-crackdown-will-hit-more-than-just-bots/). Ideally, position papers should respond to the prompts provided below (i.e., concerns about processes and principles), although we will also consider papers that do not explicitly respond to a prompt but discuss an interesting and relevant problem pertinent to this discussion.
:::{.highlight}
The "Behind the Scenes" segment aims to shed light on the hurdles researchers face but seldom discuss in their final papers. In particular, we are looking for submissions that touch upon the themes outlined in detail in the next section.
We encourage submissions from individuals and teams willing to share their experiences, including but not limited to, overcoming obstacles in data collection, navigating API limitations or costs, addressing reproducibility issues, and tackling the complexities of working with LLMs. Whether it was a struggle with dataset accessibility, an unexpected hiccup in model performance, or a creative workaround to a common problem, your insights can provide immense value to the community. This is an opportunity to discuss the often-unseen aspects of research that can significantly impact the outcome and interpretation of your work.
By sharing these experiences, we hope to foster a more transparent, collaborative, and supportive research environment, enabling us to collectively tackle the complexities of modern computational social science research.
:::
### Themes
::: {.panel-tabset}
## Issues of Process
The increasing availability of proprietary AI tools has created challenges for the research process as well as the researchers themselves -- to adapt or be left behind.
**Reproducibility**: First, we can consider their impact on the research process. While preregistrations and transparency checklists offer promising directions for clarifying contentious issues in the research process, the use of technology to collect, process, and even create data (as is possible with LLMs) may mean that they can do little to ameliorate the reproducibility crisis plaguing social science today. Is the peer review process robust to these onslaughts? What can be done to establish the credibility and validity of published research? One of the workshop organizers, David Schoch, recently published [a piece](https://arxiv.org/abs/2307.01918) on these issues.
**Embracing or resisting new tools**: Second, we can consider their impact on researchers. On the one hand, these tools may spur opportunities to study known problems differently; on the other, they may trigger the study of the new problems they herald (or bring to light). New technologies and tools may also spell unfavorable consequences for those who lack the computational resources to use them, who may even consider pivoting their research directions and methods. Who can use these tools, and what does this imply for those who cannot or may not? Related to the use of AI tools for doing research is their use for evaluating research, for example in peer review at conferences or grant agencies. While some grant agencies (for example, the NSF) have published [their guidelines](https://new.nsf.gov/news/notice-to-the-research-community-on-ai), others are learning the lessons the hard way (see allegations of the use of AI for merit [review at Australia's Research Council](https://www.researchprofessionalnews.com/rr-news-australia-government-agencies-2023-6-furore-over-use-of-ai-to-assess-research-proposals/)).
## Issues of Principle
Recent events related to questionable technology takeovers and layoffs, exposés of techno-political alliances, and questionable labor practices at large technology companies create new dilemmas for researchers collecting, annotating, and analyzing online data. These problems now extend to the dilemma of using methods and models that are the offshoots of anti-consumer corporate practices. For instance, large language models (LLMs) are owned by private companies, but they were built by ingesting the entirety of human-produced text available on the internet, without attribution. What does it mean to make privately owned LLMs central to one's research? Are LLMs (and other generative models) the same as code libraries? Or something else entirely? Do they create dilemmas for "conscientious objectors," who might refuse to use such tools on ethical grounds? What would this mean for their research prospects at journals and other publication venues?
To the above one can add the issue of truthfulness. LLMs are notorious for "hallucinating," or fabricating information, but not all researchers, let alone the public at large, are aware of the extent of this behavior. Here is an example of researchers relying on Google's Bard to create case studies that [turned out to be false](https://www.theguardian.com/business/2023/nov/02/australian-academics-apologise-for-false-ai-generated-allegations-against-big-four-consultancy-firms). What obligations do researchers have to expose such harms and to educate the public to be cautious?
## Issues of Access
ICWSM researchers have for many years relied on APIs and large data collections from Twitter and Reddit. As both platforms have severely restricted access to their data, researchers have started to look for other ways to study sociotechnical phenomena on these platforms or the web at large, such as data donations from users, internet observatories, and simulations of human-like behavior. Similarly, given the high costs of training or fine-tuning LLMs and similar AI models, a growing ecosystem has emerged for sharing models, or data labeled with such models. These developments open up opportunities for allyship between organizations with varying degrees of resources for accessing data and models. How can we raise awareness of such opportunities? How can we create structures for formalizing such alliances?
:::
:::
:::{.white .center}
[Submit](https://easychair.org/my/conference?conf=dare24){.button}
:::
:::{.center}
### Important Dates
:::
- Workshop Papers Submissions: March 31st
- Workshop Paper Acceptance Notification: April 14th
- Workshop Final Camera-Ready Paper Due: May 5th
- ICWSM-2024 Workshops Day: June 3rd
## Program
This is a full-day workshop. Our goal is to provide a venue in which participants will engage with the listed issues through various formats: keynote speakers and panels, collaborative debate-style breakouts, and lightning sessions.
Depending on whether participants (invited or accepted through the open call) are able to present in person or remotely, we will design the program to best serve both audiences.
The tentative schedule can be found below. (Buffalo local time, EDT/GMT-4)
<table class="table table-sm table-striped">
<colgroup>
<col style="width: 15%">
<col style="width: 44%">
<col style="width: 40%">
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Time</th>
<th style="text-align: left;">Description</th>
<th style="text-align: left;">Speaker</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">8:30 - 09:00am</td>
<td style="text-align: left;">Coffee</td>
<td style="text-align: left;"></td>
</tr>
<tr class="even">
<td style="text-align: left;">09:00 - 09:15am</td>
<td style="text-align: left;">Introduction</td>
<td style="text-align: left;"></td>
</tr>
<tr class="odd">
<td style="text-align: left;">09:15 - 10:15am</td>
<td style="text-align: left;">Panel: More data is not the answer/Avoiding integrity crises in research</td>
<td style="text-align: left;">Orestis Papakyriakopoulos, Helena Webb, moderated by Ella Haig</td>
</tr>
<tr class="even">
<td style="text-align: left;">10:15 - 10:50am</td>
<td style="text-align: left;">Replicability of CSS</td>
<td style="text-align: left;">Chung-hong Chan</td>
</tr>
<tr class="odd">
<td style="text-align: left;">10:50 - 11:10am</td>
<td style="text-align: left;">Coffee Break</td>
<td style="text-align: left;"></td>
</tr>
<tr class="even">
<td style="text-align: left;">11:10am - 12:45pm</td>
<td style="text-align: left;">Paper Session</td>
<td style="text-align: left;">
1. Chaitya Shah - <em>Can Social Media Platforms Transcend Political Labels? An Analysis of Neutral Conversations on Truth Social</em> <a href="paper/2024_03.pdf">Link</a><br><br>
2. Avi Rosenfeld - <em>Fighting Bias in the 2023-2024 Hamas-Israel Conflict</em> <a href="paper/2024_02.pdf">Link</a><br><br>
3. Ke Zhou - <em>How Western, Educated, Industrialized, Rich, and Democratic is Social
Computing Research?</em> <a href="paper/2024_05.pdf">Link</a><br><br>
4. Fatima Zahrah, Jason R.C. Nurse - <em>Embedding Privacy in Computational Social Science and Artificial Intelligence Research </em> <a href="paper/2024_04.pdf">Link</a><br><br>
</td>
</tr>
<tr class="odd">
<td style="text-align: left;">12:45 - 2:00pm</td>
<td style="text-align: left;">Joint Lunch Break with Data Challenge Workshop</td>
<td style="text-align: left;"></td>
</tr>
</tbody>
</table>
The afternoon consists of joint (offline) sessions with the Data Challenge Workshop. More details will follow.
<table class="table table-sm table-striped">
<colgroup>
<col style="width: 15%">
<col style="width: 44%">
<col style="width: 40%">
</colgroup>
<thead>
<tr class="header">
<th style="text-align: left;">Time</th>
<th style="text-align: left;">Description</th>
<th style="text-align: left;">Speaker</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">02:00 - 03:30pm</td>
<td style="text-align: left;">Panel</td>
<td style="text-align: left;"></td>
</tr>
<tr class="even">
<td style="text-align: left;">03:30 - 04:00pm</td>
<td style="text-align: left;">Coffee Break</td>
<td style="text-align: left;"></td>
</tr>
<tr class="odd">
<td style="text-align: left;">04:00 - 05:30pm</td>
<td style="text-align: left;">"Behind the Scenes" lightning talks</td>
<td style="text-align: left;"></td>
</tr>
</tbody>
</table>
## Speakers
::: {.column width="33%" .center}
![](https://www.professoren.tum.de/fileadmin/w00bgr/www/pics/Papakyriakopoulos_Orestis.jpg){width="100%" height="auto"}
[Orestis Papakyriakopoulos](https://www.professoren.tum.de/papakyriakopoulos-orestis)
:::
::: {.column width="66%" .left}
::: {.panel-tabset}
### Bio
Orestis Papakyriakopoulos is a professor of societal computing. His research provides ideas, frameworks, and practical solutions toward just, inclusive, and participatory socio-algorithmic ecosystems. He builds tools and conducts foundational research on platforms and artificial intelligence. Orestis analyzes new and old media through the application of data-intensive algorithms, as well as the political and social impact of the use of such algorithms themselves.
### Topic
TBD
:::
:::
::: {.column width="33%" .center}
![](https://www.cs.ox.ac.uk/files/11015//HW%20photo%20summer%202018.jpg){width="100%" height="auto"}
[Helena Webb](https://www.nottingham.ac.uk/research/groups/ai/people/helena.webb)
:::
::: {.column width="66%" .left}
::: {.panel-tabset}
### Bio
Helena Webb is an Assistant Professor in AI at Nottingham. She is an experienced socio-technical researcher with expertise across responsible research and innovation (RRI), human-computer interaction (HCI), science and technology studies, and the sociology of technology. She is interested in the ways users interact with technologies in different kinds of settings, and in how social action both shapes and is shaped by innovation.
### Topic
TBD
:::
:::
::: {.column width="33%" .center}
![](https://upload.wikimedia.org/wikipedia/commons/a/a2/Person_Image_Placeholder.png){width="60%" height="auto"}
[Chung-hong Chan](https://www.gesis.org/institut/ueber-uns/mitarbeitendenverzeichnis/person/Chung-hong.Chan?no_cache=1)
:::
::: {.column width="66%" .left}
::: {.panel-tabset}
### Bio
Dr. Chung-hong Chan (PhD University of Hong Kong, 2018) is Senior Researcher in the Department of Computational Social Science, GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany, and External Fellow at the Mannheim Center for European Social Research, University of Mannheim (Germany). An epidemiologist by training, he is interested in developing new quantitative methods for communication research.
### Topic
What makes computational communication science (ir)reproducible? ([paper](https://journal.computationalcommunication.org/article/view/5926))
:::
:::
:::{.main}
## Organizers
::: {.columns}
::: {.column width="25%" .center}
![](img/organizers/eni.png)
Eni Mustafaraj
[Wellesley College]{.normal}
:::
::: {.column width="25%" .center}
![](img/organizers/david.png)
David Schoch
[GESIS]{.normal}
:::
::: {.column width="25%" .center}
![](img/organizers/ella.jpg)
Ella Haig
[University of Portsmouth]{.normal}
:::
::: {.column width="25%" .center}
![](img/organizers/jason.jpg)
Jason Nurse
[University of Kent]{.normal}
:::
:::
:::