<schedule>
<version>413</version>
<conference>
<acronym>2025</acronym>
<title>Snap!Con 2025</title>
<start>2025-09-01</start>
<end>2025-09-03</end>
<days>3</days>
<timeslot_duration>00:05</timeslot_duration>
</conference>
<day date='2025-09-01' index='1'>
<room name='Plenary Room'>
<event guid='-kOxFs1wFXUv4gudJPex4A' id='838'>
<date>2025-09-01T12:00:00+02:00</date>
<start>10:00</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Coffee Break</type>
<language></language>
<slug>838-coffee-break</slug>
<title>Coffee Break</title>
<subtitle>Monday Morning</subtitle>
<track></track>
<abstract>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</abstract>
<description>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='GaafHMvnBlU9ctg7xA--Mw' id='850'>
<date>2025-09-01T12:30:00+02:00</date>
<start>10:30</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Plenary</type>
<language></language>
<slug>850-welcome-to-snap-con-2025</slug>
<title>Welcome to Snap!Con 2025</title>
<subtitle></subtitle>
<track></track>
<abstract>This is the official welcome session for Snap!Con 2025.</abstract>
<description>This is the official welcome session for Snap!Con 2025.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='uivcn4r2AuO5jKUk7h5CVw' id='826'>
<date>2025-09-01T13:00:00+02:00</date>
<start>11:00</start>
<duration>01:30</duration>
<room>Plenary Room</room>
<type>Workshop</type>
<language></language>
<slug>826-thinking-about-coding-workshop</slug>
<title>Thinking About Coding - Workshop</title>
<subtitle>Introducing Computational Thinking Assistants in Snap! to Foster Metacognitive Engagement</subtitle>
<track></track>
<abstract>This workshop offers a practical exploration of key insights and observations drawn from the research project developed by Francesco Ragazzini and Mariabeatrice Starace under the supervision of Prof. [Ricci](http://www.unibo.it/sitoweb/a.ricci), and explores how Snap! can be transformed into a metacognitive tool for supporting the development of computational thinking beyond the act of coding itself.

Inspired by the spirit and ideas of S. Papert, our project leverages Snap!&#39;s flexibility to shift the focus from simply coding to thinking about coding. We aim to empower learners to reflect on their creative processes and to support teachers in guiding this reflection.
 
To this end, we developed a Snap! category: Computational Thinking Assistants (CTA). These are a set of blocks designed to support metacognitive engagement directly within the coding environment. Crucially, each CTA is accompanied by a three-column table detailing relevant competencies (skills), computational practices, and observable behaviors, providing a structured framework for reflection. The CTA toolkit currently includes six blocks, each associated with a colored paperclip-themed sprite and a specific computational thinking skill. These blocks can be triggered via sprite interaction or messaging and activate a basic conversational interface designed to simulate reflective questioning. Although currently rule-based, future iterations may explore AI-powered sentiment analysis to tailor interactions more effectively. At the moment, we have developed an initial prototype of the CTA paperclips and are conducting the first round of user testing with students and teachers. All interactions with the CTAs are designed and implemented in Italian, but we can certainly translate them into English if our proposal is accepted.
The first prototype is available for viewing at the following links: [CTA1-2](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Abstractor%26Generalizator), [CTA3-4](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Iterator%26Quoter), [CTA5-6](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Decomposer%26Debugger). 

Key challenges include ensuring access to hardware, engaging students who prefer creation over reflection, and encouraging teachers to see coding tools as cross-disciplinary, process-oriented supports rather than just technical resources. To address these, we are developing canvas-based templates and toolkits inspired by Service and Game Design, along with a student Learning Journal to help design, deliver, assess, and document meaningful computational thinking experiences.

This workshop offers a novel and practice-oriented perspective on metacognitive engagement in computing education. It highlights Snap!’s potential as a thinking tool by providing educators with pedagogical, not just technical, support—like CTA blocks, design toolkits, and reflective frameworks—to develop computational thinking competencies in a variety of educational settings. By engaging directly with the research in an active format, participants will experience how coding environments can be transformed into spaces for critical thinking, creative iteration, and meaningful reflection.</abstract>
<description>This workshop offers a practical exploration of key insights and observations drawn from the research project developed by Francesco Ragazzini and Mariabeatrice Starace under the supervision of Prof. [Ricci](http://www.unibo.it/sitoweb/a.ricci), and explores how Snap! can be transformed into a metacognitive tool for supporting the development of computational thinking beyond the act of coding itself.

Inspired by the spirit and ideas of S. Papert, our project leverages Snap!&#39;s flexibility to shift the focus from simply coding to thinking about coding. We aim to empower learners to reflect on their creative processes and to support teachers in guiding this reflection.
 
To this end, we developed a Snap! category: Computational Thinking Assistants (CTA). These are a set of blocks designed to support metacognitive engagement directly within the coding environment. Crucially, each CTA is accompanied by a three-column table detailing relevant competencies (skills), computational practices, and observable behaviors, providing a structured framework for reflection. The CTA toolkit currently includes six blocks, each associated with a colored paperclip-themed sprite and a specific computational thinking skill. These blocks can be triggered via sprite interaction or messaging and activate a basic conversational interface designed to simulate reflective questioning. Although currently rule-based, future iterations may explore AI-powered sentiment analysis to tailor interactions more effectively. At the moment, we have developed an initial prototype of the CTA paperclips and are conducting the first round of user testing with students and teachers. All interactions with the CTAs are designed and implemented in Italian, but we can certainly translate them into English if our proposal is accepted.
The first prototype is available for viewing at the following links: [CTA1-2](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Abstractor%26Generalizator), [CTA3-4](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Iterator%26Quoter), [CTA5-6](http://snap.berkeley.edu/snap/snap.html#present:Username=med-2024-2025&amp;ProjectName=CTA_RagazziniStarace_Decomposer%26Debugger). 

Key challenges include ensuring access to hardware, engaging students who prefer creation over reflection, and encouraging teachers to see coding tools as cross-disciplinary, process-oriented supports rather than just technical resources. To address these, we are developing canvas-based templates and toolkits inspired by Service and Game Design, along with a student Learning Journal to help design, deliver, assess, and document meaningful computational thinking experiences.

This workshop offers a novel and practice-oriented perspective on metacognitive engagement in computing education. It highlights Snap!’s potential as a thinking tool by providing educators with pedagogical, not just technical, support—like CTA blocks, design toolkits, and reflective frameworks—to develop computational thinking competencies in a variety of educational settings. By engaging directly with the research in an active format, participants will experience how coding environments can be transformed into spaces for critical thinking, creative iteration, and meaningful reflection.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9939'>mariabeatrice starace</person>
</persons>
</event>
<event guid='ryNoG3UClVMAvJLLdz4scQ' id='844'>
<date>2025-09-01T14:30:00+02:00</date>
<start>12:30</start>
<duration>01:15</duration>
<room>Plenary Room</room>
<type>Food Break</type>
<language></language>
<slug>844-lunch-break</slug>
<title>Lunch Break</title>
<subtitle>Monday</subtitle>
<track></track>
<abstract>Go and find food somewhere, we&#39;ll meet again in 90 minutes for the next session :)</abstract>
<description>Go and find food somewhere, we&#39;ll meet again in 90 minutes for the next session :)</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
</room>
<room name='Seminar Room 1'>
<event guid='UNSSfDNyWgF8srscGnPYAg' id='800'>
<date>2025-09-01T13:00:00+02:00</date>
<start>11:00</start>
<duration>01:30</duration>
<room>Seminar Room 1</room>
<type>Workshop</type>
<language></language>
<slug>800-physical-computing-with-microblocks-and-snap</slug>
<title>Physical Computing with MicroBlocks and Snap!</title>
<subtitle></subtitle>
<track></track>
<abstract>This hands-on workshop will introduce participants to physical computing using Snap! and [MicroBlocks](https://microblocks.fun/). By combining the powerful live-coding environment of MicroBlocks with Snap!, attendees will explore how to control real-world hardware using the MicroBlocks library for Snap!.

Participants will build simple but engaging projects using microcontrollers like the micro:bit, Calliope mini, CoCube, ESP32, Raspberry Pi Pico and others. No prior electronics experience is required—just curiosity and a desire to explore computing beyond the screen!</abstract>
<description>This hands-on workshop will introduce participants to physical computing using Snap! and [MicroBlocks](https://microblocks.fun/). By combining the powerful live-coding environment of MicroBlocks with Snap!, attendees will explore how to control real-world hardware using the MicroBlocks library for Snap!.

Participants will build simple but engaging projects using microcontrollers like the micro:bit, Calliope mini, CoCube, ESP32, Raspberry Pi Pico and others. No prior electronics experience is required—just curiosity and a desire to explore computing beyond the screen!</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='3364'>Peter Mathijssen</person>
</persons>
</event>
</room>
<room name='SAP Immersive Experience Studio'>
<event guid='RwhfezKgM_i5fVl9ASUPwA' id='859'>
<date>2025-09-01T15:45:00+02:00</date>
<start>13:45</start>
<duration>00:45</duration>
<room>SAP Immersive Experience Studio</room>
<type>Bus Transfer</type>
<language></language>
<slug>859-bus-transfer-to-sap</slug>
<title>Bus Transfer to SAP</title>
<subtitle></subtitle>
<track></track>
<abstract>Please meet us in front of the PH Building (Keplerstraße 87, 69120 Heidelberg) for the bus to our evening event at SAP.
The bus back to Heidelberg will leave around 21:45. 

In case you get lost, we&#39;ll meet at WDF21 (Hasso Plattner Ring 7, 69190 Walldorf).
</abstract>
<description>Please meet us in front of the PH Building (Keplerstraße 87, 69120 Heidelberg) for the bus to our evening event at SAP.
The bus back to Heidelberg will leave around 21:45. 

In case you get lost, we&#39;ll meet at WDF21 (Hasso Plattner Ring 7, 69190 Walldorf).
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='ghhz4GecYs-qxfhJ_QxE9g' id='835'>
<date>2025-09-01T16:30:00+02:00</date>
<start>14:30</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>835-after-snap4arduino</slug>
<title>After Snap4Arduino</title>
<subtitle></subtitle>
<track></track>
<abstract>## Snap4Arduino
For over ten years, Snap4Arduino has allowed us to connect Snap! to different boards (UNO, Nano, Mega, Leonardo, Micro, Due, 101, ESP8266, NodeMCU...) to obtain data from multiple sensors and control devices from our virtual projects made with Snap!
With the arrival of MicroBlocks, we can now apply our dynamic programming (in the Smalltalk way) directly inside our microcontrollers... and we can also easily connect them to Snap! Therefore, we can create different projects (using Snap! for the virtual world and MicroBlocks for the physical one) and build interactions between them.

However, we often still want to use simple boards that are not compatible with Microblocks (basically UNOs), or we simply prefer to use a single environment (Snap!) to create our projects and use actuators and sensors as &quot;direct&quot; complements to them. That is why it still makes sense to maintain this &quot;direct connection&quot; between Snap! and those classic boards.

## S4A Connector
We present the new extension **S4A Connector** to perform this task. **&quot;Snap! for All firmata boards Connector&quot;** aims to continue the work of S4A and Snap4Arduino and to offer a built-in extension inside the official Snap! itself.
It will no longer be necessary to maintain a special Snap! distro, nor to install any plugin in our browser...
Directly from Snap!, using a Chromium-based browser (Chrome, Edge...), we will connect to our &quot;firmata&quot; devices.

![S4A Connector basic blocks](http://creativelearninglab.click/img/basic.png &quot;S4A Connector basic blocks&quot;)

![S4A Connector advanced blocks](http://creativelearninglab.click/img/advanced.png &quot;S4A Connector advanced blocks&quot;)

## Snap Creative Learning Lab

And in order not to overload Snap! but continue to offer all those functionalities that schools need (a firmware uploader, libraries and templates for different hardware used in schools...) we also present a new web space **[Snap! Creative Learning Lab](https://snap.creativelearninglab.click/?redirect=0&amp;lang=en)**

![Snap Creative Learning Lab](http://creativelearninglab.click/img/scllcLogo.png &quot;Snap Creative Learning Lab&quot;)

In this space we will also try to document other functionalities. It aims to put tools and resources to play with Snap! in your own Creative Learning Lab just one click away: connecting MicroBlocks and UNOs, making with your laser cutters, plotters, embroidery machines and 3D printers, embedding and managing Snap! and MicroBlocks projects from your school Moodle…

![Creative Learning Lab](http://creativelearninglab.click/img/cllcLogo.png &quot;Creative Learning Lab&quot;)

## Play live S4A!
This is not a workshop, but maybe you want to try the new library live. If you are joining online, any UNO board allows you to test S4A Connector. For in-person participants, some devices will be available for everybody.</abstract>
<description>## Snap4Arduino
For over ten years, Snap4Arduino has allowed us to connect Snap! to different boards (UNO, Nano, Mega, Leonardo, Micro, Due, 101, ESP8266, NodeMCU...) to obtain data from multiple sensors and control devices from our virtual projects made with Snap!
With the arrival of MicroBlocks, we can now apply our dynamic programming (in the Smalltalk way) directly inside our microcontrollers... and we can also easily connect them to Snap! Therefore, we can create different projects (using Snap! for the virtual world and MicroBlocks for the physical one) and build interactions between them.

However, we often still want to use simple boards that are not compatible with Microblocks (basically UNOs), or we simply prefer to use a single environment (Snap!) to create our projects and use actuators and sensors as &quot;direct&quot; complements to them. That is why it still makes sense to maintain this &quot;direct connection&quot; between Snap! and those classic boards.

## S4A Connector
We present the new extension **S4A Connector** to perform this task. **&quot;Snap! for All firmata boards Connector&quot;** aims to continue the work of S4A and Snap4Arduino and to offer a built-in extension inside the official Snap! itself.
It will no longer be necessary to maintain a special Snap! distro, nor to install any plugin in our browser...
Directly from Snap!, using a Chromium-based browser (Chrome, Edge...), we will connect to our &quot;firmata&quot; devices.

![S4A Connector basic blocks](http://creativelearninglab.click/img/basic.png &quot;S4A Connector basic blocks&quot;)

![S4A Connector advanced blocks](http://creativelearninglab.click/img/advanced.png &quot;S4A Connector advanced blocks&quot;)

## Snap Creative Learning Lab

And in order not to overload Snap! but continue to offer all those functionalities that schools need (a firmware uploader, libraries and templates for different hardware used in schools...) we also present a new web space **[Snap! Creative Learning Lab](https://snap.creativelearninglab.click/?redirect=0&amp;lang=en)**

![Snap Creative Learning Lab](http://creativelearninglab.click/img/scllcLogo.png &quot;Snap Creative Learning Lab&quot;)

In this space we will also try to document other functionalities. It aims to put tools and resources to play with Snap! in your own Creative Learning Lab just one click away: connecting MicroBlocks and UNOs, making with your laser cutters, plotters, embroidery machines and 3D printers, embedding and managing Snap! and MicroBlocks projects from your school Moodle…

![Creative Learning Lab](http://creativelearninglab.click/img/cllcLogo.png &quot;Creative Learning Lab&quot;)

## Play live S4A!
This is not a workshop, but maybe you want to try the new library live. If you are joining online, any UNO board allows you to test S4A Connector. For in-person participants, some devices will be available for everybody.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='10'>Joan Guillén</person>
</persons>
</event>
<event guid='U6kOqP0ISKyQzF3BoEI-VQ' id='749'>
<date>2025-09-01T16:45:00+02:00</date>
<start>14:45</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>749-mobiwebx-a-saas-ide-to-build-mobile-web-apps-100x-faster</slug>
<title>MobiWebX: A SaaS IDE to Build Mobile &amp; Web Apps 100x Faster</title>
<subtitle>Design Once, Run Apps and Webs Anywhere</subtitle>
<track></track>
<abstract>In a digital-first world, the demand for efficient, cross-platform applications is at an all-time high. Traditional development approaches often require separate toolchains and teams for mobile and web, leading to inflated costs and delayed delivery. MobiWebX offers a game-changing alternative: a SaaS-based Integrated Development Environment (IDE) that enables developers to build and deploy high-quality mobile and web apps up to 100 times faster.

With its intuitive GUI editor, Snap! for logic design, and unified codebase, MobiWebX streamlines the entire app creation process—eliminating complex coding and enabling rapid prototyping. Beyond basic applications, MobiWebX is engineered to support the development of super apps that integrate multiple services into a single user experience. The platform provides powerful scalability, full cross-platform compatibility, and deep customization capabilities to meet diverse project needs.

MobiWebX web link: https://iot.ttu.edu.tw/SnapIonic8.1/test/</abstract>
<description>In a digital-first world, the demand for efficient, cross-platform applications is at an all-time high. Traditional development approaches often require separate toolchains and teams for mobile and web, leading to inflated costs and delayed delivery. MobiWebX offers a game-changing alternative: a SaaS-based Integrated Development Environment (IDE) that enables developers to build and deploy high-quality mobile and web apps up to 100 times faster.

With its intuitive GUI editor, Snap! for logic design, and unified codebase, MobiWebX streamlines the entire app creation process—eliminating complex coding and enabling rapid prototyping. Beyond basic applications, MobiWebX is engineered to support the development of super apps that integrate multiple services into a single user experience. The platform provides powerful scalability, full cross-platform compatibility, and deep customization capabilities to meet diverse project needs.

MobiWebX web link: https://iot.ttu.edu.tw/SnapIonic8.1/test/</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9361'>fuchiungcheng</person>
</persons>
</event>
<event guid='j-f_rEQkvr1388qHPrNheg' id='751'>
<date>2025-09-01T17:00:00+02:00</date>
<start>15:00</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>751-code-art-and-embroidery</slug>
<title>Code, Art, and Embroidery</title>
<subtitle>Recreating Clive Richards’ 100 Cubes (1973)</subtitle>
<track></track>
<abstract>In 1973 Clive Richards created his 100 cubes artwork using a line plotter. It consists of a 10 × 10 grid of cubes, each rotated 10 degrees along either the x or y axis, creating a fascinating effect through the repeated rotations.

Realising the similarities between a line plotter and an embroidery machine, I set out to recreate Clive Richards&#39; 1973 100 cubes artwork using TurtleStitch, to explore how the stitched version compared to the original.

In this presentation I&#39;ll share how I recreated Clive&#39;s work using an embroidery machine. By translating the concept into thread and fabric, the embroidery machine becomes a modern-day plotter.

Clive Richards&#39; work is in the Victoria and Albert Museum (V&amp;A) in London.
https://collections.vam.ac.uk/item/O1327456/100-cubes-plotter-drawing-richards-clive/
</abstract>
<description>In 1973 Clive Richards created his 100 cubes artwork using a line plotter. It consists of a 10 × 10 grid of cubes, each rotated 10 degrees along either the x or y axis, creating a fascinating effect through the repeated rotations.

Realising the similarities between a line plotter and an embroidery machine, I set out to recreate Clive Richards&#39; 1973 100 cubes artwork using TurtleStitch, to explore how the stitched version compared to the original.

In this presentation I&#39;ll share how I recreated Clive&#39;s work using an embroidery machine. By translating the concept into thread and fabric, the embroidery machine becomes a modern-day plotter.

Clive Richards&#39; work is in the Victoria and Albert Museum (V&amp;A) in London.
https://collections.vam.ac.uk/item/O1327456/100-cubes-plotter-drawing-richards-clive/
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='391'>Margaret Low</person>
<person id='2846'>RobJLow</person>
</persons>
</event>
<event guid='vBPCrbK68u5L3LEHWcgWQw' id='829'>
<date>2025-09-01T17:15:00+02:00</date>
<start>15:15</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>829-glassboxing-and-modeling-decisions-in-simulations</slug>
<title>Glassboxing and modeling decisions in simulations</title>
<subtitle>Answering the Question: Why are cod getting smaller?</subtitle>
<track></track>
<abstract>Computer simulations have established themselves as the third pillar of science. Simulations translate technical models into dynamic, interactive scenarios and allow phenomena to be investigated, predictions to be made, and hypotheses to be tested, especially when real experiments are difficult or impossible to carry out. Accordingly, computer modeling is also gaining importance beyond computer science.

Critical engagement with simulations in the classroom therefore requires not only domain-specific knowledge, but also computer modeling skills. We suggest using targeted glassboxing to bridge this gap, giving users a means of comprehending the computer model behind the simulation and thereby supporting their understanding of the subject.
To this end, we propose a categorization of simulations based on the support they grant the user in understanding the underlying models.

To demonstrate this, we pose a question: ever wondered why cod are getting smaller? This Snap!-based cod simulation was developed to explore how human actions—like intensive fishing—can lead to evolutionary changes in animal populations. Using the example of the cod, the game helps students understand the connections between selection pressure and long-term biological change.

The game was originally used in teacher training as a paper-based activity and has now been turned into a digital version for use in the classroom. 
Created by computer science education students as part of a Master’s seminar on agile software development, the project was carried out in collaboration with the biology education department. The development process followed agile principles like iterative design, regular feedback, and interdisciplinary teamwork.

We will use the simulation as a practical example and encourage others to take the opportunity to translate a physical simulation into a glassboxed Snap! simulation, making visible some of the modeling decisions that are otherwise made subconsciously.
</abstract>
<description>Computer simulations have established themselves as the third pillar of science. Simulations translate technical models into dynamic, interactive scenarios and allow phenomena to be investigated, predictions to be made, and hypotheses to be tested, especially when real experiments are difficult or impossible to carry out. Accordingly, computer modeling is also gaining importance beyond computer science.

Critical engagement with simulations in the classroom therefore requires not only domain-specific knowledge, but also computer modeling skills. We suggest using targeted glassboxing to bridge this gap, giving users a means of comprehending the computer model behind the simulation and thereby supporting their understanding of the subject.
To this end, we propose a categorization of simulations based on the support they grant the user in understanding the underlying models.

To demonstrate this, we pose a question: ever wondered why cod are getting smaller? This Snap!-based cod simulation was developed to explore how human actions—like intensive fishing—can lead to evolutionary changes in animal populations. Using the example of the cod, the game helps students understand the connections between selection pressure and long-term biological change.

The game was originally used in teacher training as a paper-based activity and has now been turned into a digital version for use in the classroom. 
Created by computer science education students as part of a Master’s seminar on agile software development, the project was carried out in collaboration with the biology education department. The development process followed agile principles like iterative design, regular feedback, and interdisciplinary teamwork.

We will use the simulation as a practical example and encourage others to take the opportunity to translate a physical simulation into a glassboxed Snap! simulation, making visible some of the modeling decisions that are otherwise made subconsciously.
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='10092'>Jan Schlenzka</person>
</persons>
</event>
<event guid='7by50iuEcsThgXAL4Ivskw' id='852'>
<date>2025-09-01T17:30:00+02:00</date>
<start>15:30</start>
<duration>00:10</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talks Discussions</type>
<language></language>
<slug>852-talks-discussions</slug>
<title>Talks Discussions</title>
<subtitle>session 1</subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='0_DcCTG0p8IgGkTF8njhLg' id='833'>
<date>2025-09-01T17:40:00+02:00</date>
<start>15:40</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>833-promoting-digital-literacy-with-snap</slug>
<title>Promoting Digital Literacy with Snap!</title>
<subtitle>an experience report</subtitle>
<track></track>
<abstract>In this talk, I will give a demo of the Snap! private cloud developed for low-resource, poorly connected areas, and share my recent experiences bringing Snap! to Eritrea, including the challenges and successes I faced.</abstract>
<description>In this talk, I will give a demo of the Snap! private cloud developed for low-resource, poorly connected areas, and share my recent experiences bringing Snap! to Eritrea, including the challenges and successes I faced.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='5563'>Negash</person>
</persons>
</event>
<event guid='WTZJ3ifYJt57GbyoFVLLZg' id='827'>
<date>2025-09-01T17:55:00+02:00</date>
<start>15:55</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>827-snapal-a-tutor-for-every-student</slug>
<title>SnaPal: a tutor for every student</title>
<subtitle>A voice-AI that can answer all your Snap! related questions!</subtitle>
<track></track>
<abstract>SnaPal is an intelligent, voice-based educational tool designed to help students who are learning computer science principles through Snap!. By utilizing Voice-AI for conversational interactions and retrieval-augmented generation (RAG) for context-aware responses based on the Snap! documentation, SnaPal can give students immediate and accurate answers to any questions they face while coding. We would love to discuss with the Snap! community how we could make this tool better and more useful in classrooms, be it training on additional data like the Snap! Forum, integrating with teaching plans, or anything else.</abstract>
<description>SnaPal is an intelligent, voice-based educational tool designed to help students who are learning computer science principles through Snap!. By utilizing Voice-AI for conversational interactions and retrieval-augmented generation (RAG) for context-aware responses based on the Snap! documentation, SnaPal can give students immediate and accurate answers to any questions they face while coding. We would love to discuss with the Snap! community how we could make this tool better and more useful in classrooms, be it training on additional data like the Snap! Forum, integrating with teaching plans, or anything else.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='1044'>Yuan Garcia</person>
<person id='10039'>Arihant Choudhary</person>
</persons>
</event>
<event guid='kQeEfioLmxjQJI89F7QpIw' id='813'>
<date>2025-09-01T18:10:00+02:00</date>
<start>16:10</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>813-tech-together-s-ai-techathon</slug>
<title>Tech Together&#39;s AI Techathon</title>
<subtitle>Broadening Female Participation in Computing Through Near-Peer Tutoring</subtitle>
<track></track>
<abstract>Historically, women have been underrepresented in computer science and STEM careers. In response to this disparity, we decided to start early and ignite a passion for computer science in middle school girls. In late 2023, we (two high school girls) created Tech Together, an organization dedicated to sharing our passion for computing with younger girls.
By offering individual coaching, online and in-person group sessions, partnerships with Girl Scouts, and AI Techathons, Tech Together provided opportunities for more than 135 students to explore computing concepts, build problem-solving skills, and foster a sense of community. The program emphasized fun, hands-on activities, and collaborative learning. In addition, our Snap!-centered AI Techathon took place over a weekend and featured more than 20 participants using Snap! and other programs in collaboration with AI to create projects that were then showcased and judged.
Over the twelve months of our initiative, we saw an increase in female participation and engagement, highlighting the importance of near-peer role models and programming events.
</abstract>
<description>Historically, women have been underrepresented in computer science and STEM careers. In response to this disparity, we decided to start early and ignite a passion for computer science in middle school girls. In late 2023, we (two high school girls) created Tech Together, an organization dedicated to sharing our passion for computing with younger girls.
By offering individual coaching, online and in-person group sessions, partnerships with Girl Scouts, and AI Techathons, Tech Together provided opportunities for more than 135 students to explore computing concepts, build problem-solving skills, and foster a sense of community. The program emphasized fun, hands-on activities, and collaborative learning. In addition, our Snap!-centered AI Techathon took place over a weekend and featured more than 20 participants using Snap! and other programs in collaboration with AI to create projects that were then showcased and judged.
Over the twelve months of our initiative, we saw an increase in female participation and engagement, highlighting the importance of near-peer role models and programming events.
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='597'>Talia Ye</person>
<person id='4969'>Parinaz Dastur</person>
</persons>
</event>
<event guid='6WE5j8V-1wKsaOZZnSDHZA' id='819'>
<date>2025-09-01T18:25:00+02:00</date>
<start>16:25</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talk</type>
<language></language>
<slug>819-explainable-generative-ai-with-n-grams</slug>
<title>Explainable Generative AI with N-grams</title>
<subtitle></subtitle>
<track></track>
<abstract>This talk is an extension of the wonderful N-gram Generative AI project demonstrated at Robolot 2024. I was absolutely captivated by it, but always wondered which of the original source materials were being drawn from for a particular word or note. I extended the project to include that, visualized as a Snap! table view with a column for each of the sources; the values in the rows are a heart (❤️) to indicate that the song/story was chosen, or a circle (⚫) to indicate that the song/story could have been chosen but was not.

![Program Screenshot](https://people.eecs.berkeley.edu/~ddgarcia/tmp/ExplainableAI.png &quot;Explainable N-Gram Generative AI demo on songs&quot;)

Here&#39;s the project: https://snap.berkeley.edu/snap/snap.html#present:Username=dan%20garcia&amp;ProjectName=SnapGPT%20Explainable%20AI%20snapcon%202025</abstract>
<description>This talk is an extension of the wonderful N-gram Generative AI project demonstrated at Robolot 2024. I was absolutely captivated by it, but always wondered which of the original source materials were being drawn from for a particular word or note. I extended the project to include that, visualized as a Snap! table view with a column for each of the sources; the values in the rows are a heart (❤️) to indicate that the song/story was chosen, or a circle (⚫) to indicate that the song/story could have been chosen but was not.

![Program Screenshot](https://people.eecs.berkeley.edu/~ddgarcia/tmp/ExplainableAI.png &quot;Explainable N-Gram Generative AI demo on songs&quot;)

Here&#39;s the project: https://snap.berkeley.edu/snap/snap.html#present:Username=dan%20garcia&amp;ProjectName=SnapGPT%20Explainable%20AI%20snapcon%202025</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='317'>Dan Garcia</person>
</persons>
</event>
<event guid='L9uP1VRx07nJ9EFlhPtyAQ' id='853'>
<date>2025-09-01T18:40:00+02:00</date>
<start>16:40</start>
<duration>00:10</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talks Discussions</type>
<language></language>
<slug>853-talks-discussions</slug>
<title>Talks Discussions</title>
<subtitle>session 2</subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='_toZKO-KSiHUA_Kr2GAV4A' id='841'>
<date>2025-09-01T18:50:00+02:00</date>
<start>16:50</start>
<duration>00:30</duration>
<room>SAP Immersive Experience Studio</room>
<type>Coffee Break</type>
<language></language>
<slug>841-coffee-break</slug>
<title>Coffee Break </title>
<subtitle>Monday Afternoon</subtitle>
<track></track>
<abstract>Grab a coffee and enjoy it at the SAP Immersive Experience Studio or from where you are joining us online. </abstract>
<description>Grab a coffee and enjoy it at the SAP Immersive Experience Studio or from where you are joining us online. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='JElsao2l8-0Zqi4BzXVNEw' id='828'>
<date>2025-09-01T19:20:00+02:00</date>
<start>17:20</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>828-creating-art-installations-with-snap</slug>
<title>Creating Art Installations with Snap! </title>
<subtitle>Embodied Code, Interactive Liveness</subtitle>
<track></track>
<abstract>“Embodied Code, Interactive Liveness” is an art installation that aims to explore live coding as a paradigmatic example of the liminal space between the virtual and the real. With a multi-layered approach based on three interaction modes, this work focuses on embodiment, code, interaction and liveness. It is built on an educational approach aiming at dissolving the boundaries between programmer and participant. 

Mimicking a live-coding session, the work presents two interconnected modules: one for generating audio and one for real-time visuals. Both modules react to the user’s presence while, at the same time, offering an interface to directly interact with some variables and manipulate the code that is producing such sound and visuals. The sound module is programmed in MicroBlocks. The visual module is programmed in Snap! using the performer mode in order to simultaneously show the code and its visual results. While both modules are essential to the overall experience in the piece, this presentation focuses on the Snap! visual module to exemplify and explain the overall approach. 

Participants are invited to interact at different levels. The first is body interaction. As participants approach the installation, a camera will sense their presence and start activating and manipulating a series of parameters. Since this happens while the code is shown, what is affected by the users’ movements is visible at all times. 

Once participants become familiar with the embodied interaction, they are invited to progress to the second level. In it the experience shifts closer to the role of a programmer. Here, a trackpad and a numerical keyboard are offered to manipulate the parameters. Additionally, using a custom-crafted hardware interface device the audience can load different scripts or reset the one they are working on, should they wish to revert the changes made. This transition from body-based to manual control offers a first step towards the participants&#39; understanding of live coding, while still maintaining the playful nature of the experience.

At the third level, participants are given total control over the code, as they can freely interact with the Snap! blocks along with the parameters. This free interaction lets them modify anything they wish without any safeguards. This introduces the concepts of error and control, while keeping the reset button as a safeguard of a system that can be endlessly repaired.

In exploring several aspects of live coding through embodied and interface interactions, and combining creative, educational, and collaborative elements, this piece aims at creating a reflective experience where participants think about the connection between the body, code, and creativity. It encourages users to interact with the system in a fun and simple way, allowing them to explore live coding by moving their bodies, making what is often a technical or complex process into something that feels accessible and enjoyable.

Acknowledgements: Grant PID2021-128875NA-I00 funded by MCIN/AEI/10.13039/501100011033 and &quot;ERDF A way of making Europe”.

Authors: Enric Mor, Joan Soler-Adillon, Bernat Romagosa, Laia Blasco-Soplón, Jonathan Chacón
</abstract>
<description>“Embodied Code, Interactive Liveness” is an art installation that aims to explore live coding as a paradigmatic example of the liminal space between the virtual and the real. With a multi-layered approach based on three interaction modes, this work focuses on embodiment, code, interaction and liveness. It is built on an educational approach aiming at dissolving the boundaries between programmer and participant. 

Mimicking a live-coding session, the work presents two interconnected modules: one for generating audio and one for real-time visuals. Both modules react to the user’s presence while, at the same time, offering an interface to directly interact with some variables and manipulate the code that is producing such sound and visuals. The sound module is programmed in MicroBlocks. The visual module is programmed in Snap! using the performer mode in order to simultaneously show the code and its visual results. While both modules are essential to the overall experience in the piece, this presentation focuses on the Snap! visual module to exemplify and explain the overall approach. 

Participants are invited to interact at different levels. The first is body interaction. As participants approach the installation, a camera will sense their presence and start activating and manipulating a series of parameters. Since this happens while the code is shown, what is affected by the users’ movements is visible at all times. 

Once participants become familiar with the embodied interaction, they are invited to progress to the second level. In it the experience shifts closer to the role of a programmer. Here, a trackpad and a numerical keyboard are offered to manipulate the parameters. Additionally, using a custom-crafted hardware interface device the audience can load different scripts or reset the one they are working on, should they wish to revert the changes made. This transition from body-based to manual control offers a first step towards the participants&#39; understanding of live coding, while still maintaining the playful nature of the experience.

At the third level, participants are given total control over the code, as they can freely interact with the Snap! blocks along with the parameters. This free interaction lets them modify anything they wish without any safeguards. This introduces the concepts of error and control, while keeping the reset button as a safeguard of a system that can be endlessly repaired.

In exploring several aspects of live coding through embodied and interface interactions, and combining creative, educational, and collaborative elements, this piece aims at creating a reflective experience where participants think about the connection between the body, code, and creativity. It encourages users to interact with the system in a fun and simple way, allowing them to explore live coding by moving their bodies, making what is often a technical or complex process into something that feels accessible and enjoyable.

Acknowledgements: Grant PID2021-128875NA-I00 funded by MCIN/AEI/10.13039/501100011033 and &quot;ERDF A way of making Europe”.

Authors: Enric Mor, Joan Soler-Adillon, Bernat Romagosa, Laia Blasco-Soplón, Jonathan Chacón
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='380'>Enric Mor</person>
</persons>
</event>
<event guid='YHaeia2DiNmQgT57PCkbhw' id='750'>
<date>2025-09-01T19:25:00+02:00</date>
<start>17:25</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>750-mathematics-and-computer-science</slug>
<title>Mathematics and Computer Science</title>
<subtitle>TurtleStitch as bridge between Math and CS</subtitle>
<track></track>
<abstract>The connection between Math and CS is strong in theory. We share language, concepts, and methods. In education, however, the fields are strongly separated. We don&#39;t help each other, and sometimes even dislike each other. In the summer of 2025 I&#39;ll use TurtleStitch at two conferences to bridge the differences. First at [Bridges2025](https://www.bridgesmathart.org/b2025/), which builds bridges between Math and Art. After that at the [TurtleStitch10](https://xota.github.io/turtlestitch10/) Fest, where 10 years of TurtleStitching are celebrated.

</abstract>
<description>The connection between Math and CS is strong in theory. We share language, concepts, and methods. In education, however, the fields are strongly separated. We don&#39;t help each other, and sometimes even dislike each other. In the summer of 2025 I&#39;ll use TurtleStitch at two conferences to bridge the differences. First at [Bridges2025](https://www.bridgesmathart.org/b2025/), which builds bridges between Math and Art. After that at the [TurtleStitch10](https://xota.github.io/turtlestitch10/) Fest, where 10 years of TurtleStitching are celebrated.

</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9'>Joek van Montfort</person>
</persons>
</event>
<event guid='FfWqK5LP8gTMIwPrbWDHEw' id='810'>
<date>2025-09-01T19:30:00+02:00</date>
<start>17:30</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>810-code-in-the-bag</slug>
<title>Code in The Bag</title>
<subtitle>A dynamic, human built repository for coding related skills</subtitle>
<track></track>
<abstract>Almost a decade ago I developed a course curriculum called &quot;Code Crafting&quot; in which I introduced the foundations of programming via fiber arts (crochet, machine embroidery, quilting). As an undergraduate experience it provided a flavor of what it meant to program, but didn&#39;t give guidance about how to become a master. Adaptations of the course for middle school afterschool programs and for fiber arts guilds showed the glaring inadequacies of learning programming, math, or fiber arts from internet search, and more recently from generative AI (e.g. Copilot). This lightning talk introduces an online resource that provides a multi-disciplinary network of nodes curated and written by humans. We are using it as the foundation of an informal local learning initiative in rural Vermont to address the need for non-commercial, secure access to information about computer science, the internet, math, communication, and crafting. Why &quot;Code in the Bag&quot;? Because we make bags using needle (or hook) and thread (or yarn) on soft material, based on ancient traditions of &quot;sewing circles&quot; and &quot;quilting bees&quot;. The software is an attempt to capture the rapidly diminishing, very specific knowledge of crafters. This is not only about how to do something in a domain, but how universal human problem solving threads together intersecting domains of interest and brings people together to communicate directly with each other. TurtleStitch will be used to illustrate how all communication is in &quot;code&quot;, and once you know the code you can &#39;speak like a native&#39; while making beautiful things. Spoiler alert: the paragraph above was written before we held three 90-minute &#39;tastings&#39; on crochet, embroidery, and quilting respectively. Human engagement and learning: 3. Machine engagement: 0.</abstract>
<description>Almost a decade ago I developed a course curriculum called &quot;Code Crafting&quot; in which I introduced the foundations of programming via fiber arts (crochet, machine embroidery, quilting). As an undergraduate experience it provided a flavor of what it meant to program, but didn&#39;t give guidance about how to become a master. Adaptations of the course for middle school afterschool programs and for fiber arts guilds showed the glaring inadequacies of learning programming, math, or fiber arts from internet search, and more recently from generative AI (e.g. Copilot). This lightning talk introduces an online resource that provides a multi-disciplinary network of nodes curated and written by humans. We are using it as the foundation of an informal local learning initiative in rural Vermont to address the need for non-commercial, secure access to information about computer science, the internet, math, communication, and crafting. Why &quot;Code in the Bag&quot;? Because we make bags using needle (or hook) and thread (or yarn) on soft material, based on ancient traditions of &quot;sewing circles&quot; and &quot;quilting bees&quot;. The software is an attempt to capture the rapidly diminishing, very specific knowledge of crafters. This is not only about how to do something in a domain, but how universal human problem solving threads together intersecting domains of interest and brings people together to communicate directly with each other. TurtleStitch will be used to illustrate how all communication is in &quot;code&quot;, and once you know the code you can &#39;speak like a native&#39; while making beautiful things. Spoiler alert: the paragraph above was written before we held three 90-minute &#39;tastings&#39; on crochet, embroidery, and quilting respectively. Human engagement and learning: 3. Machine engagement: 0.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='2191'>Ursula Wolz</person>
</persons>
</event>
<event guid='mlBRrOk9G7KStP0Snn1XsA' id='837'>
<date>2025-09-01T19:35:00+02:00</date>
<start>17:35</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>837-codepainter-snap</slug>
<title>CodePainter@Snap! </title>
<subtitle>Understand Snap! code better!</subtitle>
<track></track>
<abstract>CodePainter@Snap! helps kids taking their first steps in coding get a first impression of what code blocks stand for and learn what happens inside a coding sequence. A computer is no magic at all.

You will see simplified code in an easy-to-read pseudocode language. By starting at the top and following the coding blocks step by step, just as the processor does, you will paint the resulting steps on the screen.
You will use the four arrow keys &quot;right&quot;, &quot;left&quot;, &quot;up&quot; and &quot;down&quot; to walk through the code. The space bar will lead you to the next coding example.
It looks quite easy; however, you will need to focus on what the next step is, especially when you are in a loop or have to follow conditions.

If you are experienced with lists in Snap!, you can create your own examples.
</abstract>
<description>CodePainter@Snap! helps kids taking their first steps in coding get a first impression of what code blocks stand for and learn what happens inside a coding sequence. A computer is no magic at all.

You will see simplified code in an easy-to-read pseudocode language. By starting at the top and following the coding blocks step by step, just as the processor does, you will paint the resulting steps on the screen.
You will use the four arrow keys &quot;right&quot;, &quot;left&quot;, &quot;up&quot; and &quot;down&quot; to walk through the code. The space bar will lead you to the next coding example.
It looks quite easy; however, you will need to focus on what the next step is, especially when you are in a loop or have to follow conditions.

If you are experienced with lists in Snap!, you can create your own examples.
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='10587'>Gerd Ruehle</person>
<person id='596'>Volker Enders</person>
</persons>
</event>
<event guid='sw51hmZqFX-1ESORn_r9Zw' id='802'>
<date>2025-09-01T19:40:00+02:00</date>
<start>17:40</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>802-build-a-microblocks-synthesizer-and-play-it-from-snap</slug>
<title>Build a MicroBlocks synthesizer and play it from Snap!</title>
<subtitle></subtitle>
<track></track>
<abstract>The new MicroBlocks library for analog sound synthesis allows us to generate all sorts of synthesized sounds. The library was envisioned for live coding music, but nothing stops us from using it to create our own digital instruments.

Additionally, the MicroBlocks library for Snap! makes interoperability between the two languages very transparent and easy, allowing us to design a graphical interface for our instrument in Snap! and have the microcontroller produce the sounds.</abstract>
<description>The new MicroBlocks library for analog sound synthesis allows us to generate all sorts of synthesized sounds. The library was envisioned for live coding music, but nothing stops us from using it to create our own digital instruments.

Additionally, the MicroBlocks library for Snap! makes interoperability between the two languages very transparent and easy, allowing us to design a graphical interface for our instrument in Snap! and have the microcontroller produce the sounds.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='7'>Bernat Romagosa</person>
</persons>
</event>
<event guid='SJQPWuxNloHB7MxJu4jAuA' id='825'>
<date>2025-09-01T19:45:00+02:00</date>
<start>17:45</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>825-draw-with-numbers-and-make</slug>
<title>Draw with numbers and make.</title>
<subtitle>3D print over textiles (3D grids), laser cutting for wooden garments (2D grids), and laser cutting for seamless unions (2D interlocks)</subtitle>
<track></track>
<abstract>Block-code instructional activities that provide an introductory guide to this coding language through three digital fabrication and experimental techniques in the area of textile design: 3D print over textiles (3D grids), laser cutting for wooden garments (2D grids), and laser cutting for seamless unions (2D interlocks)</abstract>
<description>Block-code instructional activities that provide an introductory guide to this coding language through three digital fabrication and experimental techniques in the area of textile design: 3D print over textiles (3D grids), laser cutting for wooden garments (2D grids), and laser cutting for seamless unions (2D interlocks)</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='10052'>Luis Mayorga</person>
</persons>
</event>
<event guid='MwOqQj6hkTnj_ILkIcgtqg' id='815'>
<date>2025-09-01T19:50:00+02:00</date>
<start>17:50</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>815-voice-programming-with-snap</slug>
<title>Voice programming with Snap!</title>
<subtitle>Connecting an offline voice recognition device via MicroBlocks to create programs with Snap!</subtitle>
<track></track>
<abstract>Connecting an offline [voice recognition device](https://www.dfrobot.com/product-2665.html) via [MicroBlocks](https://microblocks.fun) to create programs with Snap!

This experiment started during a demo of an online voice recognition device for MicroBlocks. Since this device can be trained to recognize new words, the idea was: why not train some block-name commands and use them in Snap!?

Because the connection between MicroBlocks and Snap! is trivial, I started some naive experiments that I will show in this talk.</abstract>
<description>Connecting an offline [voice recognition device](https://www.dfrobot.com/product-2665.html) via [MicroBlocks](https://microblocks.fun) to create programs with Snap!

This experiment started during a demo of an online voice recognition device for MicroBlocks. Since this device can be trained to recognize new words, the idea was: why not train some block-name commands and use them in Snap!?

Because the connection between MicroBlocks and Snap! is trivial, I started some naive experiments that I will show in this talk.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='394'>José García</person>
</persons>
</event>
<event guid='8ejk5upEeeDZrdHTh6e5Nw' id='821'>
<date>2025-09-01T19:55:00+02:00</date>
<start>17:55</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>821-working-with-data-in-turtlestitch</slug>
<title>Working with Data in TurtleStitch</title>
<subtitle>tbd</subtitle>
<track></track>
<abstract>This is a placeholder for the lightning talk I&#39;m coming up with soon. </abstract>
<description>This is a placeholder for the lightning talk I&#39;m coming up with soon. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='312'>Jadga Hügle</person>
</persons>
</event>
<event guid='x2tFTKsjJGBR0XnrogYbLQ' id='812'>
<date>2025-09-01T20:00:00+02:00</date>
<start>18:00</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Lightning Talk</type>
<language></language>
<slug>812-bloxbuddy</slug>
<title>BloxBuddy</title>
<subtitle>An AI assistant that won&#39;t take over</subtitle>
<track></track>
<abstract>We demonstrate an early version of BloxBuddy, an AI assistant designed for NetsBlox, a variant of Snap_!_. Most language models have difficulty working with Snap_!_’s block-based code, but by building off of Snap_!_’s new ‘Lisp code’ feature, we’ve been able to make block-based code easier for them to understand. Combined with access to the NetsBlox documentation, BloxBuddy is able to answer many questions about a user’s code. This can make block-based programming even more accessible!

At the same time, we want to avoid making a tool students rely on to do their work for them and also avoid exposing students to potentially harmful unrestricted AI outputs. BloxBuddy suggests ideas and advice, rather than moving blocks itself, so users can get assistance but still learn Snap_!_. It provides the available questions rather than using a free-form text input, steering the conversation toward productive topics.

If you are interested in helping test out BloxBuddy, please contact gordon.stein@vanderbilt.edu</abstract>
<description>We demonstrate an early version of BloxBuddy, an AI assistant designed for NetsBlox, a variant of Snap_!_. Most language models have difficulty working with Snap_!_’s block-based code, but by building off of Snap_!_’s new ‘Lisp code’ feature, we’ve been able to make block-based code easier for them to understand. Combined with access to the NetsBlox documentation, BloxBuddy is able to answer many questions about a user’s code. This can make block-based programming even more accessible!

At the same time, we want to avoid making a tool students rely on to do their work for them and also avoid exposing students to potentially harmful unrestricted AI outputs. BloxBuddy suggests ideas and advice, rather than moving blocks itself, so users can get assistance but still learn Snap_!_. It provides the available questions rather than using a free-form text input, steering the conversation toward productive topics.

If you are interested in helping test out BloxBuddy, please contact gordon.stein@vanderbilt.edu</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='2020'>Gordon Stein</person>
</persons>
</event>
<event guid='Kc0ZbZ7bz0MGL7ydWpmdEw' id='856'>
<date>2025-09-01T20:05:00+02:00</date>
<start>18:05</start>
<duration>00:10</duration>
<room>SAP Immersive Experience Studio</room>
<type>Talks Discussions</type>
<language></language>
<slug>856-lightning-talks-discussions</slug>
<title>Lightning Talks Discussions</title>
<subtitle></subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='rkpktpnWaCW4SFVxRcov4g' id='823'>
<date>2025-09-01T20:15:00+02:00</date>
<start>18:15</start>
<duration>01:00</duration>
<room>SAP Immersive Experience Studio</room>
<type>Keynote</type>
<language></language>
<slug>823-snapping-to-snap</slug>
<title>Snapping to Snap!</title>
<subtitle></subtitle>
<track></track>
<abstract>Computation and programming have been embraced by artists and designers for more than 60 years, with pioneering examples such as Vera Molnar and Frieder Nake. However, it was in the 2000s that visual and interactive artworks experienced significant growth, driven by the emergence of programming languages that democratized access to technology for artists, designers, and creators, helping to establish fields such as creative coding, computational generative art, and interactive media art.
This talk presents approaches and explorations that highlight the role Snap! plays—and can play—in the context of the arts and design, especially in artistic creation and educating new generations of designers and artists. We will explore concepts such as interdisciplinarity; the interplay between design, art, and technology; whether the “A” is in STEAM; interactivity, embodiment, performativity, and live coding.</abstract>
<description>Computation and programming have been embraced by artists and designers for more than 60 years, with pioneering examples such as Vera Molnar and Frieder Nake. However, it was in the 2000s that visual and interactive artworks experienced significant growth, driven by the emergence of programming languages that democratized access to technology for artists, designers, and creators, helping to establish fields such as creative coding, computational generative art, and interactive media art.
This talk presents approaches and explorations that highlight the role Snap! plays—and can play—in the context of the arts and design, especially in artistic creation and educating new generations of designers and artists. We will explore concepts such as interdisciplinarity; the interplay between design, art, and technology; whether the “A” is in STEAM; interactivity, embodiment, performativity, and live coding.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='380'>Enric Mor</person>
</persons>
</event>
<event guid='JOCN11aOV6EOtY2f5NyouQ' id='861'>
<date>2025-09-01T21:15:00+02:00</date>
<start>19:15</start>
<duration>01:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Food Break</type>
<language></language>
<slug>861-conference-dinner</slug>
<title>Conference Dinner</title>
<subtitle></subtitle>
<track></track>
<abstract>Let&#39;s feast :) </abstract>
<description>Let&#39;s feast :) </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='yeqCeFpKKdq03F1DTUryrw' id='864'>
<date>2025-09-01T21:45:00+02:00</date>
<start>19:45</start>
<duration>00:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Address</type>
<language></language>
<slug>864-karlstrom-award-address-by-brian-harvey</slug>
<title>Karlstrom Award Address by Brian Harvey</title>
<subtitle></subtitle>
<track></track>
<abstract>2025 Karlstrom Award winner Brian Harvey will reflect on his journey through computing education.

https://awards.acm.org/karlstrom</abstract>
<description>2025 Karlstrom Award winner Brian Harvey will reflect on his journey through computing education.

https://awards.acm.org/karlstrom</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='945'>Brian Harvey</person>
</persons>
</event>
<event guid='EUwYWaAMdOAER4bS-9RBHQ' id='807'>
<date>2025-09-01T22:20:00+02:00</date>
<start>20:20</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Show Us Your Projects</type>
<language></language>
<slug>807-snap-hack</slug>
<title>Snap!Hack</title>
<subtitle>Demo of progress made on implementing the classic roguelike Hack in Snap!</subtitle>
<track></track>
<abstract>Snap!Hack is an implementation of the Hack roguelike game created primarily by Jay Fenlason when he was a student of Brian Harvey&#39;s at Lincoln-Sudbury High School. The game is implemented in the Snap! programming language, which was co-created by Brian Harvey and Jens Mönig, as an homage to Brian Harvey&#39;s impact on computer science education.

Since the demo of my initial work on Snap!Hack at Snap!Shot 2024, I received a copy of the original Hack source code from Brian Harvey and corresponded with Jay Fenlason about reimplementing Hack in Snap! This demo will showcase the latest version of the game, based on the source code from 1982.</abstract>
<description>Snap!Hack is an implementation of the Hack roguelike game created primarily by Jay Fenlason when he was a student of Brian Harvey&#39;s at Lincoln-Sudbury High School. The game is implemented in the Snap! programming language, which was co-created by Brian Harvey and Jens Mönig, as an homage to Brian Harvey&#39;s impact on computer science education.

Since the demo of my initial work on Snap!Hack at Snap!Shot 2024, I received a copy of the original Hack source code from Brian Harvey and corresponded with Jay Fenlason about reimplementing Hack in Snap! This demo will showcase the latest version of the game, based on the source code from 1982.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='3238'>Dan Stormont</person>
</persons>
</event>
<event guid='NpmkXHxy47kO5Ptatwa7BQ' id='822'>
<date>2025-09-01T22:25:00+02:00</date>
<start>20:25</start>
<duration>00:05</duration>
<room>SAP Immersive Experience Studio</room>
<type>Show Us Your Projects</type>
<language></language>
<slug>822-laaos</slug>
<title>LAAOS</title>
<subtitle>Live ASCII Art On Steroids</subtitle>
<track></track>
<abstract>tbd</abstract>
<description>tbd</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='49'>Jens Mönig</person>
</persons>
</event>
<event guid='h45vkiPYrBOx5id7R1K4YQ' id='863'>
<date>2025-09-01T22:30:00+02:00</date>
<start>20:30</start>
<duration>00:30</duration>
<room>SAP Immersive Experience Studio</room>
<type>Show Us Your Project and Live Coding Open Mic</type>
<language></language>
<slug>863-show-us-your-project-and-live-coding-open-mic</slug>
<title>Show Us Your Project and Live Coding Open Mic</title>
<subtitle></subtitle>
<track></track>
<abstract>If you haven&#39;t submitted a project yet, but feel like showing something spontaneously, or if you want to show off your live coding abilities, this session is for you.

3-5 minutes, no slides, just projects and live coding. </abstract>
<description>If you haven&#39;t submitted a project yet, but feel like showing something spontaneously, or if you want to show off your live coding abilities, this session is for you.

3-5 minutes, no slides, just projects and live coding. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='g-a31B_xPKZlfQpvlmAy1A' id='862'>
<date>2025-09-01T23:00:00+02:00</date>
<start>21:00</start>
<duration>01:15</duration>
<room>SAP Immersive Experience Studio</room>
<type>Food Break</type>
<language></language>
<slug>862-conference-dinner</slug>
<title>Conference Dinner</title>
<subtitle></subtitle>
<track></track>
<abstract>Let&#39;s feast </abstract>
<description>Let&#39;s feast </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='R7-CmLAkjvjMqhiX3uzNHw' id='860'>
<date>2025-09-01T23:45:00+02:00</date>
<start>21:45</start>
<duration>00:45</duration>
<room>SAP Immersive Experience Studio</room>
<type>Bus Transfer</type>
<language></language>
<slug>860-bus-tranfer-from-sap-to-heidelberg</slug>
<title>Bus Transfer from SAP to Heidelberg</title>
<subtitle></subtitle>
<track></track>
<abstract>We should be back in Heidelberg around 22:15.
Drop-off will be around the corner from the central station and at the PH where we departed.</abstract>
<description>We should be back in Heidelberg around 22:15.
Drop-off will be around the corner from the central station and at the PH where we departed.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
</room>
</day>
<day date='2025-09-02' index='2'>
<room name='Plenary Room'>
<event guid='gD5QlniBApN0479lnQqxZQ' id='839'>
<date>2025-09-02T12:00:00+02:00</date>
<start>10:00</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Coffee Break</type>
<language></language>
<slug>839-coffee-break</slug>
<title>Coffee Break</title>
<subtitle>Tuesday Morning</subtitle>
<track></track>
<abstract>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</abstract>
<description>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='YTbidJ6MJQsxcg4GDD8RPA' id='848'>
<date>2025-09-02T12:30:00+02:00</date>
<start>10:30</start>
<duration>01:30</duration>
<room>Plenary Room</room>
<type>Workshop</type>
<language></language>
<slug>848-what-makes-snap-special</slug>
<title>What makes Snap! special?</title>
<subtitle></subtitle>
<track></track>
<abstract>In this workshop, we&#39;ll share with you what we think makes Snap! special: from fun introductory media computation activities using built-in graphic effects to writing your own functions, and from creating block libraries to building your own small microworlds for a classroom.

Join us to find out what makes Snap! special :)</abstract>
<description>In this workshop, we&#39;ll share with you what we think makes Snap! special: from fun introductory media computation activities using built-in graphic effects to writing your own functions, and from creating block libraries to building your own small microworlds for a classroom.

Join us to find out what makes Snap! special :)</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='312'>Jadga Hügle</person>
<person id='945'>Brian Harvey</person>
<person id='2'>Michael Ball</person>
<person id='49'>Jens Mönig</person>
<person id='7'>Bernat Romagosa</person>
</persons>
</event>
<event guid='7Bo6hu2tynHxpmNE6n3Y8Q' id='845'>
<date>2025-09-02T14:00:00+02:00</date>
<start>12:00</start>
<duration>01:15</duration>
<room>Plenary Room</room>
<type>Food Break</type>
<language></language>
<slug>845-lunch-break</slug>
<title>Lunch Break</title>
<subtitle>Tuesday</subtitle>
<track></track>
<abstract>Go and find food somewhere; we&#39;ll meet again in 75 minutes for the next session :)</abstract>
<description>Go and find food somewhere; we&#39;ll meet again in 75 minutes for the next session :)</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='rUNA1fjzpz8QlADO0Qq1lQ' id='817'>
<date>2025-09-02T15:15:00+02:00</date>
<start>13:15</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>817-prototyping-workflow-automation-with-snap</slug>
<title>Prototyping Workflow Automation with Snap!</title>
<subtitle>How block-based computational thinking can help</subtitle>
<track></track>
<abstract>The range of workflow automation platforms has been growing recently. Platforms like n8n, Make, or Zapier, as well as environments like Microsoft Power Platform, aim to make workflow automation accessible to any citizen. Each of these platforms presents its concepts from scratch, positioning itself as an alternative to the others. Having the foundations of computational thinking from a block-based language like Snap! makes these platforms much easier to understand and enables much more powerful implementation and use.</abstract>
<description>The range of workflow automation platforms has been growing recently. Platforms like n8n, Make, or Zapier, as well as environments like Microsoft Power Platform, aim to make workflow automation accessible to any citizen. Each of these platforms presents its concepts from scratch, positioning itself as an alternative to the others. Having the foundations of computational thinking from a block-based language like Snap! makes these platforms much easier to understand and enables much more powerful implementation and use.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='218'>Xavier Pi</person>
</persons>
</event>
<event guid='xq_Uj4LIXk16BjXkoSun1A' id='752'>
<date>2025-09-02T15:30:00+02:00</date>
<start>13:30</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>752-plcedu</slug>
<title>PLCedu</title>
<subtitle>Educational PLC with Snap!</subtitle>
<track></track>
<abstract>An explanation of the PLCedu open-source hardware project using Snap!

Vocational training students use PLCedu as an educational PLC to interact with industrial-level signals (12V/24V digital inputs and outputs, -10V to 10V analog inputs, and 0V to 10V analog outputs), controlling it with Snap! and Python.

https://www.binefa.com/index.php/IoT-Vertebrae_PLC_Edu</abstract>
<description>An explanation of the PLCedu open-source hardware project using Snap!

Vocational training students use PLCedu as an educational PLC to interact with industrial-level signals (12V/24V digital inputs and outputs, -10V to 10V analog inputs, and 0V to 10V analog outputs), controlling it with Snap! and Python.

https://www.binefa.com/index.php/IoT-Vertebrae_PLC_Edu</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='613'>Jordi Binefa</person>
</persons>
</event>
<event guid='X_dIpmhMi76rckRbK0nKlA' id='846'>
<date>2025-09-02T15:45:00+02:00</date>
<start>13:45</start>
<duration>02:00</duration>
<room>Plenary Room</room>
<type>Snap4europe</type>
<language></language>
<slug>846-snap4europe</slug>
<title>Snap4Europe</title>
<subtitle></subtitle>
<track></track>
<abstract>Presentation of the Snap4Europe project and resources: https://snap4europe.eu/</abstract>
<description>Presentation of the Snap4Europe project and resources: https://snap4europe.eu/</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='48'>Jens-Peter Knemeyer</person>
<person id='232'>Nicole Marmé</person>
</persons>
</event>
<event guid='OysegHVnrb_PQUcjNitQug' id='842'>
<date>2025-09-02T17:45:00+02:00</date>
<start>15:45</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Coffee Break</type>
<language></language>
<slug>842-coffee-break</slug>
<title>Coffee Break</title>
<subtitle>Tuesday Afternoon</subtitle>
<track></track>
<abstract>Grab a coffee and enjoy it at the PH Aula or from where you are joining us online. </abstract>
<description>Grab a coffee and enjoy it at the PH Aula or from where you are joining us online. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='l58Q6efsEaW7b77gLx01bA' id='799'>
<date>2025-09-02T18:15:00+02:00</date>
<start>16:15</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>799-introducing-digital-twins-in-computer-science-education-solving-rubik-scube-with-snap-and-lego</slug>
<title>Introducing Digital Twins in Computer Science Education: Solving Rubik’s Cube® with Snap! and LEGO®</title>
<subtitle></subtitle>
<track></track>
<abstract>This talk presents a promising approach to teaching core computing concepts by combining Snap! and LEGO® robotics in the context of a classic puzzle: the Rubik’s Cube®. We use a LEGO® Education SPIKE robot capable of solving a physical Rubik’s Cube, paired with a digital twin: a virtual cube model developed in Snap! that controls the physical robot in real time.

**Concept** **and** **Implementation**

Our system consists of two connected components:

1. A robot that solves a physical Rubik’s Cube, inspired by the PrimeCube and buildable from a single LEGO® Education SPIKE Prime set.

2. Its digital twin in Snap!: an interactive, visual representation of the Rubik’s Cube, allowing users to explore and execute solution steps virtually.

The distinguishing feature of our approach is the direct linkage between Snap! and the robot using the out-of-the-box serial interface of the LEGO® SPIKE system. Actions performed within the Snap! environment, such as executing algorithms or manipulating the cube, are mirrored by the robot, resulting in corresponding changes to the real-world cube. This integration demonstrates digital control over physical processes in an accessible and motivating way.

**Educational Perspectives**

This digital–physical coupling provides a concrete context for exploring several important topics in computer science education:

- Physical Computing: How abstract code and algorithms can directly influence physical devices.

- Algorithm Design and Visualization: Using the Rubik’s Cube as an engaging platform to demonstrate algorithms, state changes, and debugging processes in a visual and tangible way.

- Basics of Machine Learning: The process of scanning the physical cube requires machine learning techniques for accurate color recognition, introducing students to practical applications of artificial intelligence in robotics.

- Modeling and Abstraction: How digital twins can help illustrate the representation and control of real-world systems by computational models.

We will outline the technical setup, demonstrate the interaction between Snap! and the robot, and share ideas for how digital twins can be meaningfully integrated into computer science education.</abstract>
<description>This talk presents a promising approach to teaching core computing concepts by combining Snap! and LEGO® robotics in the context of a classic puzzle: the Rubik’s Cube®. We use a LEGO® Education SPIKE robot capable of solving a physical Rubik’s Cube, paired with a digital twin: a virtual cube model developed in Snap! that controls the physical robot in real time.

**Concept** **and** **Implementation**

Our system consists of two connected components:

1. A robot that solves a physical Rubik’s Cube, inspired by the PrimeCube and buildable from a single LEGO® Education SPIKE Prime set.

2. Its digital twin in Snap!: an interactive, visual representation of the Rubik’s Cube, allowing users to explore and execute solution steps virtually.

The distinguishing feature of our approach is the direct linkage between Snap! and the robot using the out-of-the-box serial interface of the LEGO® SPIKE system. Actions performed within the Snap! environment, such as executing algorithms or manipulating the cube, are mirrored by the robot, resulting in corresponding changes to the real-world cube. This integration demonstrates digital control over physical processes in an accessible and motivating way.

**Educational Perspectives**

This digital–physical coupling provides a concrete context for exploring several important topics in computer science education:

- Physical Computing: How abstract code and algorithms can directly influence physical devices.

- Algorithm Design and Visualization: Using the Rubik’s Cube as an engaging platform to demonstrate algorithms, state changes, and debugging processes in a visual and tangible way.

- Basics of Machine Learning: The process of scanning the physical cube requires machine learning techniques for accurate color recognition, introducing students to practical applications of artificial intelligence in robotics.

- Modeling and Abstraction: How digital twins can help illustrate the representation and control of real-world systems by computational models.

We will outline the technical setup, demonstrate the interaction between Snap! and the robot, and share ideas for how digital twins can be meaningfully integrated into computer science education.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9857'>Konstantin Oltmann</person>
</persons>
</event>
<event guid='1rZptvLid7DHsqk_e_QioQ' id='816'>
<date>2025-09-02T18:30:00+02:00</date>
<start>16:30</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>816-thinking-about-coding-introducing-computational-thinking-assistants-in-snap-to-foster-metacognitive-engagement</slug>
<title>Thinking About Coding: Introducing Computational Thinking Assistants in Snap! to Foster Metacognitive Engagement</title>
<subtitle></subtitle>
<track></track>
<abstract>This project, developed by Francesco Ragazzini and Mariabeatrice Starace under the supervision of Prof. [Ricci](https://www.unibo.it/sitoweb/a.ricci), explores how Snap! can be transformed into a metacognitive tool for supporting the development of computational thinking beyond the act of coding itself. Inspired by the spirit and ideas of Seymour Papert, our project leverages the flexibility of Snap! to shift the focus from simply coding to **thinking about coding**. We aim to empower learners to reflect on their creative processes and to support teachers in guiding this reflection.
To this end, we developed a Snap! category: **Computational Thinking Assistants (CTA)**. These are a set of blocks designed to support metacognitive engagement directly within the coding environment. Crucially, each CTA is accompanied by a three-column table detailing relevant **competencies (skills)**, **computational practices**, and **observable behaviors**, providing a structured framework for reflection.
The CTA toolkit currently includes six blocks, each associated with a colored paperclip-themed sprite and a specific computational thinking skill:
- **Decomposer (purple)**: Helps break down a problem into smaller parts and match each to the appropriate category of coding blocks.
- **Debugger (light blue)**: Normalizes error as a natural part of coding and offers strategic hints for identifying and resolving bugs.
- **Iterator (red)**: Encourages incremental development and iterative improvement through trial and error.
- **Quoter (orange)**: Supports the documentation of one’s process and the inclusion of credits, fostering both self-awareness and recognition of others.
- **Generalizator (blue)**: Prompts the user to consider how a solution can be adapted or applied to a broader range of problems or contexts.
- **Abstractor (green)**: Aids in identifying common patterns and creating reusable, higher-level solutions from specific instances.
These blocks can be triggered via sprite interaction or messaging and activate a basic conversational interface designed to simulate reflective questioning. Although currently rule-based, future iterations may explore AI-powered sentiment analysis to tailor interactions more effectively.
At the moment, we have developed an initial prototype of the CTA paper clips and are conducting the first round of user testing with students and teachers. All interactions with the CTAs are designed and implemented in Italian, but we can certainly translate them into English if our proposal is accepted. The first prototype is available at the following links: [vado.li/cta1](http://vado.li/cta1), [vado.li/cta2](http://vado.li/cta2), [vado.li/cta3](http://vado.li/cta3)
Key challenges include ensuring access to hardware, engaging students who prefer creation over reflection, and encouraging teachers to see coding tools as cross-disciplinary, process-oriented supports rather than just technical resources. To address these, future developments will offer **canvas-based** **templates** and **toolkits** inspired by Service and Game Design, along with a student **Learning Journal** to help design, deliver, assess, and document meaningful computational thinking experiences.
Our project aims to use Snap! as a metacognitive tool to foster and develop computational thinking competencies in a variety of educational settings. If such skills are essential beyond coding itself, then we must provide educators with pedagogical, not just technical, support.</abstract>
<description>This project, developed by Francesco Ragazzini and Mariabeatrice Starace under the supervision of Prof. [Ricci](https://www.unibo.it/sitoweb/a.ricci), explores how Snap! can be transformed into a metacognitive tool for supporting the development of computational thinking beyond the act of coding itself. Inspired by the spirit and ideas of Seymour Papert, our project leverages the flexibility of Snap! to shift the focus from simply coding to **thinking about coding**. We aim to empower learners to reflect on their creative processes and to support teachers in guiding this reflection.
To this end, we developed a Snap! category: **Computational Thinking Assistants (CTA)**. These are a set of blocks designed to support metacognitive engagement directly within the coding environment. Crucially, each CTA is accompanied by a three-column table detailing relevant **competencies (skills)**, **computational practices**, and **observable behaviors**, providing a structured framework for reflection.
The CTA toolkit currently includes six blocks, each associated with a colored paperclip-themed sprite and a specific computational thinking skill:
- **Decomposer (purple)**: Helps break down a problem into smaller parts and match each to the appropriate category of coding blocks.
- **Debugger (light blue)**: Normalizes error as a natural part of coding and offers strategic hints for identifying and resolving bugs.
- **Iterator (red)**: Encourages incremental development and iterative improvement through trial and error.
- **Quoter (orange)**: Supports the documentation of one’s process and the inclusion of credits, fostering both self-awareness and recognition of others.
- **Generalizator (blue)**: Prompts the user to consider how a solution can be adapted or applied to a broader range of problems or contexts.
- **Abstractor (green)**: Aids in identifying common patterns and creating reusable, higher-level solutions from specific instances.
These blocks can be triggered via sprite interaction or messaging and activate a basic conversational interface designed to simulate reflective questioning. Although currently rule-based, future iterations may explore AI-powered sentiment analysis to tailor interactions more effectively.
At the moment, we have developed an initial prototype of the CTA paper clips and are conducting the first round of user testing with students and teachers. All interactions with the CTAs are designed and implemented in Italian, but we can certainly translate them into English if our proposal is accepted. The first prototype is available at the following links: [vado.li/cta1](http://vado.li/cta1), [vado.li/cta2](http://vado.li/cta2), [vado.li/cta3](http://vado.li/cta3)
Key challenges include ensuring access to hardware, engaging students who prefer creation over reflection, and encouraging teachers to see coding tools as cross-disciplinary, process-oriented supports rather than just technical resources. To address these, future developments will offer **canvas-based** **templates** and **toolkits** inspired by Service and Game Design, along with a student **Learning Journal** to help design, deliver, assess, and document meaningful computational thinking experiences.
Our project aims to use Snap! as a metacognitive tool to foster and develop computational thinking competencies in a variety of educational settings. If such skills are essential beyond coding itself, then we must provide educators with pedagogical, not just technical, support.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9940'>Francesco Ragazzini</person>
</persons>
</event>
<event guid='JDd-XcBSgzrg-AxHfyz-ZQ' id='836'>
<date>2025-09-02T18:45:00+02:00</date>
<start>16:45</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>836-new-means-to-communicate-listen-to-your-written-text-with-snap-get-your-spoken-text-from-snap</slug>
<title>New means to communicate: Listen to your written text with Snap! / Get your spoken text from Snap!</title>
<subtitle></subtitle>
<track></track>
<abstract>From the very beginning, Snap! has let you communicate with your users and audience: the Say and Think blocks display data, and the Ask block collects data from users. More options came later with the Write block and the Costume-from-text block. Still, you could only display data for reading on the stage and receive data typed manually into a pop-up input field.

Now, with the new &quot;recognize speech&quot; block, the speech library lets you communicate with your users and audience in a more advanced way: by voice. Use the speak block to avoid say bubbles on the stage, and the recognize-speech block to do away with pop-up ask prompts.

In this session you will get some inspiration for what your applications could look like using the speech library.</abstract>
<description>From the very beginning, Snap! has let you communicate with your users and audience: the Say and Think blocks display data, and the Ask block collects data from users. More options came later with the Write block and the Costume-from-text block. Still, you could only display data for reading on the stage and receive data typed manually into a pop-up input field.

Now, with the new &quot;recognize speech&quot; block, the speech library lets you communicate with your users and audience in a more advanced way: by voice. Use the speak block to avoid say bubbles on the stage, and the recognize-speech block to do away with pop-up ask prompts.

In this session you will get some inspiration for what your applications could look like using the speech library.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='596'>Volker Enders</person>
<person id='10587'>Gerd Ruehle</person>
</persons>
</event>
<event guid='xczaIU0L26-EEcmh2ggk5A' id='865'>
<date>2025-09-02T19:00:00+02:00</date>
<start>17:00</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>865-teach-python-in-turtlestitch</slug>
<title>Teach Python in TurtleStitch</title>
<subtitle></subtitle>
<track></track>
<abstract>This lightning talk shares my experience of teaching Python with a hybrid method in TurtleStitch. A Snap! library called Python Blocks will be introduced to the audience as a tool for implementing this method.

Python Blocks is a Python-syntax block library that provides commonly used Python commands and basic functions. Users can input text variable names, text expressions, block variable names, or block expressions into these commands. The blocks can be used independently to write a fully Python-style, block-based TurtleStitch program, or combined with existing Snap! commands for hybrid programming.

The purpose of this approach is to help students, especially those from non-English-speaking countries, to more intuitively understand the usage of Python commands and functions, enabling them to master the Python language more quickly. Additionally, some commands in Python Blocks can speed up Snap! programming. You can also convert programs written in Python Blocks into actual Python code with a single command.
</abstract>
<description>This lightning talk shares my experience of teaching Python with a hybrid method in TurtleStitch. A Snap! library called Python Blocks will be introduced to the audience as a tool for implementing this method.

Python Blocks is a Python-syntax block library that provides commonly used Python commands and basic functions. Users can input text variable names, text expressions, block variable names, or block expressions into these commands. The blocks can be used independently to write a fully Python-style, block-based TurtleStitch program, or combined with existing Snap! commands for hybrid programming.

The purpose of this approach is to help students, especially those from non-English-speaking countries, to more intuitively understand the usage of Python commands and functions, enabling them to master the Python language more quickly. Additionally, some commands in Python Blocks can speed up Snap! programming. You can also convert programs written in Python Blocks into actual Python code with a single command.
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='281'>Simon Mong</person>
</persons>
</event>
<event guid='I3VGDHToykBuXplY5v8sqA' id='854'>
<date>2025-09-02T19:15:00+02:00</date>
<start>17:15</start>
<duration>00:10</duration>
<room>Plenary Room</room>
<type>Talks Discussions</type>
<language></language>
<slug>854-talks-discussions</slug>
<title>Talks Discussions</title>
<subtitle>Session 3</subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='a_X8tjCjFiL-Uik3OwwhQw' id='801'>
<date>2025-09-02T19:25:00+02:00</date>
<start>17:25</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>801-coding-flowers-in-turtlestitch-from-curves-to-paper-cut-designs</slug>
<title>Coding Flowers in TurtleStitch: From Curves to Paper-Cut Designs</title>
<subtitle></subtitle>
<track></track>
<abstract>Explore how TurtleStitch can be used to code rose curves and petal-based flower designs for paper cutting. This talk showcases how block-based programming brings math, geometry, and digital making together through creative coding to produce beautiful paper flower arrangements.</abstract>
<description>Explore how TurtleStitch can be used to code rose curves and petal-based flower designs for paper cutting. This talk showcases how block-based programming brings math, geometry, and digital making together through creative coding to produce beautiful paper flower arrangements.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='3285'>Elaine Wolfe</person>
</persons>
</event>
<event guid='lodrdVTN9irLi0VAdbouCA' id='808'>
<date>2025-09-02T19:40:00+02:00</date>
<start>17:40</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>808-music-world-building-and-ai-integration</slug>
<title>Music, World Building and AI Integration</title>
<subtitle>New Additions to the NetsBlox Ecosystem</subtitle>
<track></track>
<abstract>As computer science teachers seek ways to engage students with real-world, collaborative and cross-disciplinary projects, this talk will introduce exciting new features in NetsBlox, an extension of Snap!. NetsBlox enables students to create distributed computing projects using just two simple concepts: Remote Procedure Calls (RPCs) for accessing online data and services, and message passing for communication between projects. These abstractions make advanced CS concepts accessible and engaging real-world projects possible, enabling students to build non-trivial distributed systems like chatrooms and multiplayer games.

We&#39;ll demonstrate four new additions that dramatically expand classroom possibilities:

**BeatBlox** transforms programming into music creation, perfect for interdisciplinary projects. Students can import popular songs and beats and connect keyboards or microphones to blend algorithmic and live music. They can import sheet music directly into code and create digital bands where each computer becomes an instrument playing in perfect synchronization. Imagine students coding their own DJ software or collaboratively composing algorithmic symphonies.

**PhoneIoT** turns any smartphone into a programmable sensor platform and remote controller. NetsBlox projects can access phone sensors like accelerometer and gyroscope and create custom touch interfaces with buttons and joysticks. PhoneIoT makes it possible for students to control their projects through tilting, touching, turning or shaking. The breakthrough feature lets NetsBlox programs run directly on phones, enabling real-time sensor processing. The results of those computations can then be stored in the cloud with RPCs or sent back to another NetsBlox program running on the computer using message passing. Using PhoneIoT and NetsBlox, students can build everything from motion-controlled games to data collection apps for science classes using their own phones.

**RoboScape Online** provides 3D robot simulation where multiple students collaborate or compete using individual robots or even robot teams. The major new feature is worldbuilding: students can design and modify 3D environments directly from NetsBlox code, creating dynamic simulations where obstacles appear randomly or multi-level robot challenges emerge programmatically. This makes robotics even more exciting and accessible to any classroom without expensive hardware.

**AI Integration** connects students to ChatGPT through a simple NetsBlox service, enabling custom chatbots and intelligent game characters. We&#39;ll also preview BloxBuddy, an AI programming tutor that helps students debug code, provides hints, and answers questions in real-time.

These additions address common teacher challenges: engaging reluctant programmers through music and games, connecting CS to other subjects, making mobile development accessible, and providing robotics experiences without hardware costs. Each tool scaffolds from beginner-friendly templates to advanced algorithmic thinking, supporting diverse skill levels within single classrooms.

All features run in-browser with no installation required, making them ideal for typical school environments and maintaining NetsBlox&#39;s core philosophy of making complex concepts simple. The talk will show live projects using each tool, complete with code examples teachers can immediately adapt for their classrooms.


</abstract>
<description>As computer science teachers seek ways to engage students with real-world, collaborative and cross-disciplinary projects, this talk will introduce exciting new features in NetsBlox, an extension of Snap!. NetsBlox enables students to create distributed computing projects using just two simple concepts: Remote Procedure Calls (RPCs) for accessing online data and services, and message passing for communication between projects. These abstractions make advanced CS concepts accessible and engaging real-world projects possible, enabling students to build non-trivial distributed systems like chatrooms and multiplayer games.
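The distinction between the two concepts can be pictured with a conceptual sketch in plain Python (not NetsBlox block code; the service data and mailbox helpers below are invented for illustration): an RPC-style call returns a value to the caller, while message passing delivers data to another running program.

```python
import queue

# Conceptual sketch only: RPC vs message passing, in plain Python.
def weather_rpc(city):
    # Stand-in for a remote service call; a real RPC would go over the network.
    fake_service = {"Nashville": 31, "Heidelberg": 24}
    return fake_service[city]

# Message passing: each "project" owns a mailbox; senders drop messages in.
mailboxes = {"projectA": queue.Queue(), "projectB": queue.Queue()}

def send_message(recipient, message):
    mailboxes[recipient].put(message)

temp = weather_rpc("Heidelberg")          # RPC: the caller gets a value back
send_message("projectB", {"temp": temp})  # message: fire-and-forget to a peer
received = mailboxes["projectB"].get()
```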

We&#39;ll demonstrate four new additions that dramatically expand classroom possibilities:

**BeatBlox** transforms programming into music creation, perfect for interdisciplinary projects. Students can import popular songs and beats and connect keyboards or microphones to blend algorithmic and live music. They can import sheet music directly into code and create digital bands where each computer becomes an instrument playing in perfect synchronization. Imagine students coding their own DJ software or collaboratively composing algorithmic symphonies.

**PhoneIoT** turns any smartphone into a programmable sensor platform and remote controller. NetsBlox projects can access phone sensors like accelerometer and gyroscope and create custom touch interfaces with buttons and joysticks. PhoneIoT makes it possible for students to control their projects through tilting, touching, turning or shaking. The breakthrough feature lets NetsBlox programs run directly on phones, enabling real-time sensor processing. The results of those computations can then be stored in the cloud with RPCs or sent back to another NetsBlox program running on the computer using message passing. Using PhoneIoT and NetsBlox, students can build everything from motion-controlled games to data collection apps for science classes using their own phones.

**RoboScape Online** provides 3D robot simulation where multiple students collaborate or compete using individual robots or even robot teams. The major new feature is worldbuilding: students can design and modify 3D environments directly from NetsBlox code, creating dynamic simulations where obstacles appear randomly or multi-level robot challenges emerge programmatically. This makes robotics even more exciting and accessible to any classroom without expensive hardware.

**AI Integration** connects students to ChatGPT through a simple NetsBlox service, enabling custom chatbots and intelligent game characters. We&#39;ll also preview BloxBuddy, an AI programming tutor that helps students debug code, provides hints, and answers questions in real-time.

These additions address common teacher challenges: engaging reluctant programmers through music and games, connecting CS to other subjects, making mobile development accessible, and providing robotics experiences without hardware costs. Each tool scaffolds from beginner-friendly templates to advanced algorithmic thinking, supporting diverse skill levels within single classrooms.

All features run in-browser with no installation required, making them ideal for typical school environments and maintaining NetsBlox&#39;s core philosophy of making complex concepts simple. The talk will show live projects using each tool, complete with code examples teachers can immediately adapt for their classrooms.


</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='300'>Akos Ledeczi</person>
</persons>
</event>
<event guid='j3tLa6bbVTozbQCNugAsXw' id='806'>
<date>2025-09-02T19:55:00+02:00</date>
<start>17:55</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>806-snaptranslatase</slug>
<title>SnapTranslatase</title>
<subtitle></subtitle>
<track></track>
<abstract>SnapTranslatase is a user-friendly way for anyone to learn more about an RNA sequence. Provided the user has an RNA sequence, obtained through, say, RNA-seq, SnapTranslatase lets them not only learn the amino acids encoded by the sequence and view that amino acid sequence in FASTA format (a commonly used sequence format), but also identify the protein encoded by the RNA sequence and the species most likely to synthesize that version of the protein.</abstract>
<description>SnapTranslatase is a user-friendly way for anyone to learn more about an RNA sequence. Provided the user has an RNA sequence, obtained through, say, RNA-seq, SnapTranslatase lets them not only learn the amino acids encoded by the sequence and view that amino acid sequence in FASTA format (a commonly used sequence format), but also identify the protein encoded by the RNA sequence and the species most likely to synthesize that version of the protein.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9891'>SnapTranslatase</person>
</persons>
</event>
<event guid='hAkKNaJrGF2NX7LhIlTlUA' id='855'>
<date>2025-09-02T20:10:00+02:00</date>
<start>18:10</start>
<duration>00:10</duration>
<room>Plenary Room</room>
<type>Talks Discussions</type>
<language></language>
<slug>855-talks-discussions</slug>
<title>Talks Discussions</title>
<subtitle>Session 4</subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='zxw0C5MnEWzMMnkcKNNxbg' id='824'>
<date>2025-09-02T20:20:00+02:00</date>
<start>18:20</start>
<duration>01:00</duration>
<room>Plenary Room</room>
<type>Keynote</type>
<language></language>
<slug>824-creativity-context-and-community</slug>
<title>Creativity, Context, and Community</title>
<subtitle></subtitle>
<track></track>
<abstract>Margaret will tell us all about her experience using Snap! at Warwick University.</abstract>
<description>Margaret will tell us all about her experience using Snap! at Warwick University.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='391'>Margaret Low</person>
</persons>
</event>
</room>
<room name='Seminar Room 1'>
<event guid='i8Bmsi57Fhjjwa8aB4IuPw' id='849'>
<date>2025-09-02T12:30:00+02:00</date>
<start>10:30</start>
<duration>01:30</duration>
<room>Seminar Room 1</room>
<type>Workshop</type>
<language></language>
<slug>849-introduction-to-turtlestitch</slug>
<title>Introduction to TurtleStitch</title>
<subtitle>Making Programming Tangible</subtitle>
<track></track>
<abstract>TurtleStitch (https://turtlestitch.org) is a fork of Snap! that lets you export files for digital embroidery machines, laser cutters or cutting plotters. 
In this workshop, we will give an introduction to TurtleStitch, show its advantages and limitations in maker and school contexts and create our own designs that can later be embroidered on our machines. 

Join us to find out how to transfer your code to the real world. </abstract>
<description>TurtleStitch (https://turtlestitch.org) is a fork of Snap! that lets you export files for digital embroidery machines, laser cutters or cutting plotters. 
In this workshop, we will give an introduction to TurtleStitch, show its advantages and limitations in maker and school contexts and create our own designs that can later be embroidered on our machines. 

Join us to find out how to transfer your code to the real world. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='312'>Jadga Hügle</person>
</persons>
</event>
</room>
</day>
<day date='2025-09-03' index='3'>
<room name='Plenary Room'>
<event guid='foFi-PujVGXF_LEGnZmY2g' id='840'>
<date>2025-09-03T12:00:00+02:00</date>
<start>10:00</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Coffee Break</type>
<language></language>
<slug>840-coffee-break</slug>
<title>Coffee Break</title>
<subtitle>Wednesday Morning</subtitle>
<track></track>
<abstract>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</abstract>
<description>Let&#39;s meet at the PH Aula for a coffee before the conference day starts.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='VMCQmI7eiQfYTnF4dSYNXg' id='811'>
<date>2025-09-03T12:30:00+02:00</date>
<start>10:30</start>
<duration>01:30</duration>
<room>Plenary Room</room>
<type>Workshop</type>
<language></language>
<slug>811-breaking-the-ceiling</slug>
<title>Breaking the Ceiling</title>
<subtitle>… of Traditional Algorithmic Thinking</subtitle>
<track></track>
<abstract>Our algorithmics lessons are still shaped by the pioneering days of the computer age, by limited memory and the use of static data structures, although modern PCs merely simulate these. Students learning to program with Snap! no longer use records and arrays, but indexed lists as the basic data type. They develop new solution strategies, such as the increased use of methods with return values (reporters) and of matrices/tables. These solutions pave the way for the transition to new concepts such as functional programming and piping.
Examples such as:
- Symmetric cryptographic methods
- Implementation of finite automata
- Sorting methods
- Dealing with multidimensional data structures
- Pattern recognition in texts
are used to demonstrate how traditional algorithmic solutions can be modified and improved.
</abstract>
<description>Our algorithmics lessons are still shaped by the pioneering days of the computer age, by limited memory and the use of static data structures, although modern PCs merely simulate these. Students learning to program with Snap! no longer use records and arrays, but indexed lists as the basic data type. They develop new solution strategies, such as the increased use of methods with return values (reporters) and of matrices/tables. These solutions pave the way for the transition to new concepts such as functional programming and piping.
Examples such as:
- Symmetric cryptographic methods
- Implementation of finite automata
- Sorting methods
- Dealing with multidimensional data structures
- Pattern recognition in texts
are used to demonstrate how traditional algorithmic solutions can be modified and improved.
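As a rough Python sketch of the functional, reporter-style approach described above (the workshop itself works in Snap! blocks, so this is only an analogy), the first listed example, a symmetric cipher, might be built from small value-returning functions rather than index-juggling loops:

```python
# Python analogy for the reporter style: a symmetric Caesar cipher built
# from small value-returning functions. (Illustrative only; the workshop
# material itself is in Snap!.)
def shift_char(ch, key):
    if not ch.isalpha():
        return ch
    base = ord('A') if ch.isupper() else ord('a')
    return chr((ord(ch) - base + key) % 26 + base)

def encrypt(text, key):
    return ''.join(shift_char(c, key) for c in text)

def decrypt(text, key):
    return encrypt(text, -key)   # symmetric: decryption is the inverse shift
```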
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9917'>Fritz Hasselhorn</person>
<person id='1352'>Fhasselhorn</person>
</persons>
</event>
<event guid='CEIyNQ3I00mgZLQrbmTBHw' id='847'>
<date>2025-09-03T14:00:00+02:00</date>
<start>12:00</start>
<duration>01:15</duration>
<room>Plenary Room</room>
<type>Food Break</type>
<language></language>
<slug>847-lunch-break</slug>
<title>Lunch Break</title>
<subtitle>Wednesday</subtitle>
<track></track>
<abstract>Go and find food somewhere, we&#39;ll meet again in 90 minutes for the next session :)</abstract>
<description>Go and find food somewhere, we&#39;ll meet again in 90 minutes for the next session :)</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='6F_9Ww78VK_5nd567a2i-w' id='834'>
<date>2025-09-03T15:15:00+02:00</date>
<start>13:15</start>
<duration>01:30</duration>
<room>Plenary Room</room>
<type>Workshop</type>
<language></language>
<slug>834-how-to-evaluate-student-work-a-competency-rubric-for-snap-projects</slug>
<title>How to evaluate student work - a competency rubric for Snap! projects</title>
<subtitle></subtitle>
<track></track>
<abstract>Objectively evaluating individual student projects remains challenging due to a paucity of suitable tools. Block-based programming languages such as Snap! are extensively utilised in the pedagogy of computer science basics and the facilitation of project creation. The present study developed and refined a rubric for evaluating Snap! projects, based on a dataset of 36 student projects collected over three years.
Following a thorough review and subsequent revision by experts in the field, the rubric was subjected to a test process involving prospective teachers. These teachers evaluated a test database comprising ten additional projects, with and without the tool in question. The findings demonstrate that the rubric significantly enhances assessment consistency and facilitates more precise differentiation of project quality. The rubric facilitates a more precise evaluation of programming competencies and offers educators a valuable resource for assessing block-based programming work.

In this workshop we&#39;ll familiarize you with the competency rubric, try it on different projects and discuss its applicability with you. </abstract>
<description>Objectively evaluating individual student projects remains challenging due to a paucity of suitable tools. Block-based programming languages such as Snap! are extensively utilised in the pedagogy of computer science basics and the facilitation of project creation. The present study developed and refined a rubric for evaluating Snap! projects, based on a dataset of 36 student projects collected over three years.
Following a thorough review and subsequent revision by experts in the field, the rubric was subjected to a test process involving prospective teachers. These teachers evaluated a test database comprising ten additional projects, with and without the tool in question. The findings demonstrate that the rubric significantly enhances assessment consistency and facilitates more precise differentiation of project quality. The rubric facilitates a more precise evaluation of programming competencies and offers educators a valuable resource for assessing block-based programming work.

In this workshop we&#39;ll familiarize you with the competency rubric, try it on different projects and discuss its applicability with you. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='48'>Jens-Peter Knemeyer</person>
<person id='211'>Alexandra Abramova</person>
<person id='1564'>Alexandra Abramova</person>
</persons>
</event>
<event guid='E7UIPrm_YSHOP3bSWGMztA' id='843'>
<date>2025-09-03T16:45:00+02:00</date>
<start>14:45</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Coffee Break</type>
<language></language>
<slug>843-coffee-break</slug>
<title>Coffee Break</title>
<subtitle>Wednesday Afternoon</subtitle>
<track></track>
<abstract>Grab a coffee and enjoy it at the PH Aula or from where you are joining us online. </abstract>
<description>Grab a coffee and enjoy it at the PH Aula or from where you are joining us online. </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='3287VZ1Ga5ihLFWgGt59Ww' id='809'>
<date>2025-09-03T17:15:00+02:00</date>
<start>15:15</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>809-visual-programming-languages-kickboard-or-inflatable-swimmies</slug>
<title>Visual Programming Languages:  Kickboard or Inflatable Swimmies?</title>
<subtitle>A Provocative Discussion of How To Learn to Code</subtitle>
<track></track>
<abstract>Visual Programming Languages now have a 20-year history. They support an introduction to the fundamentals of traditional programming. In 2008 Jane Margolis published &quot;Stuck in the Shallow End&quot;, in which she asserted that young people of color, especially girls, were stuck in the shallow end of the pool when it came to learning to code. The analogy was drawn from the very real phenomenon, back then, that people of color were unable to gain access to good swimming instruction despite the best efforts of organizations like the YMCA and Swim America. I&#39;ve promoted &quot;coding as swimming&quot; for almost 50 years: helping people of all ages and sizes have the courage to stick their toe in the water. I hold them as they put their face in and float, and coach them to develop a strong and efficient stroke and to build endurance doing laps. But in academic settings I’ve rarely had the opportunity to introduce them to the open water of lakes and seas, where swimming against the current or the glory of snorkeling makes it all worthwhile. Visual languages have been an essential tool, like a kickboard, that can reliably help you build skills and endurance. But the coding environment is not a teacher who instructs on water safety in open water. The standard initial curriculum isn’t exactly throwing the novice into a deep river (which is how my father learned to swim near Heidelberg over a hundred years ago). I have come to question whether the way we teach and what we teach is akin to putting on inflatable arm flotation devices (called “swimmies” in the US). The resources available through crowd-sourced ‘remixing’ or conversational AI encourage swimming in the deep end with puncture-prone support. Through two examples, one from Snap!/TurtleStitch and one from Quilt Design, I want to ignite a passionate and thoughtful discussion about my hypothesis.
Please note: A Bennington Library summer offering, &quot;A Taste of Craft: Kids in grades 3 through 6 are invited to try crocheting, quilting, and embroidery by hand and coding&quot;, both upended and re-affirmed my hypothesis, especially regarding computer-mediated instruction.</abstract>
<description>Visual Programming Languages now have a 20-year history. They support an introduction to the fundamentals of traditional programming. In 2008 Jane Margolis published &quot;Stuck in the Shallow End&quot;, in which she asserted that young people of color, especially girls, were stuck in the shallow end of the pool when it came to learning to code. The analogy was drawn from the very real phenomenon, back then, that people of color were unable to gain access to good swimming instruction despite the best efforts of organizations like the YMCA and Swim America. I&#39;ve promoted &quot;coding as swimming&quot; for almost 50 years: helping people of all ages and sizes have the courage to stick their toe in the water. I hold them as they put their face in and float, and coach them to develop a strong and efficient stroke and to build endurance doing laps. But in academic settings I’ve rarely had the opportunity to introduce them to the open water of lakes and seas, where swimming against the current or the glory of snorkeling makes it all worthwhile. Visual languages have been an essential tool, like a kickboard, that can reliably help you build skills and endurance. But the coding environment is not a teacher who instructs on water safety in open water. The standard initial curriculum isn’t exactly throwing the novice into a deep river (which is how my father learned to swim near Heidelberg over a hundred years ago). I have come to question whether the way we teach and what we teach is akin to putting on inflatable arm flotation devices (called “swimmies” in the US). The resources available through crowd-sourced ‘remixing’ or conversational AI encourage swimming in the deep end with puncture-prone support. Through two examples, one from Snap!/TurtleStitch and one from Quilt Design, I want to ignite a passionate and thoughtful discussion about my hypothesis.
Please note: A Bennington Library summer offering, &quot;A Taste of Craft: Kids in grades 3 through 6 are invited to try crocheting, quilting, and embroidery by hand and coding&quot;, both upended and re-affirmed my hypothesis, especially regarding computer-mediated instruction.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='2191'>Ursula Wolz</person>
</persons>
</event>
<event guid='a6H-zQp6v6NgZo5PfCooKw' id='830'>
<date>2025-09-03T17:30:00+02:00</date>
<start>15:30</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>830-ten-years-of-turtlestitch-reflections-from-the-anniversary-event</slug>
<title>Ten Years of TurtleStitch - Reflections from the Anniversary Event</title>
<subtitle>What We Celebrated, Learned, and Envisioned at TurtleStitch10</subtitle>
<track></track>
<abstract>This summer, TurtleStitch turns ten. To mark the occasion, we are hosting the TurtleStitch10 Fest in Tilburg (NL), bringing together educators, artists, coders, and makers to celebrate a decade of creative coding and machine embroidery. Over two days, participants will share their practices, projects, and pedagogical approaches--from classroom experiences and hybrid fabrication methods to algorithmic design and textile art.

By the time of this talk, the Fest will have taken place, and we will share highlights and insights from the event: what was presented, what sparked discussion, and what new ideas emerged. The talk will conclude with an outlook on upcoming developments--some already underway within the TurtleStitch community, and others shaped by ongoing collaboration with the Snap! team.</abstract>
<description>This summer, TurtleStitch turns ten. To mark the occasion, we are hosting the TurtleStitch10 Fest in Tilburg (NL), bringing together educators, artists, coders, and makers to celebrate a decade of creative coding and machine embroidery. Over two days, participants will share their practices, projects, and pedagogical approaches--from classroom experiences and hybrid fabrication methods to algorithmic design and textile art.

By the time of this talk, the Fest will have taken place, and we will share highlights and insights from the event: what was presented, what sparked discussion, and what new ideas emerged. The talk will conclude with an outlook on upcoming developments--some already underway within the TurtleStitch community, and others shaped by ongoing collaboration with the Snap! team.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='93'>Andrea Mayr-Stalder</person>
<person id='9'>Joek van Montfort</person>
</persons>
</event>
<event guid='0cYy9VP5kxb9h3al5U2Fqw' id='814'>
<date>2025-09-03T17:45:00+02:00</date>
<start>15:45</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>814-a-building-kit-for-artificial-neural-networks</slug>
<title>A Building Kit for Artificial Neural Networks</title>
<subtitle></subtitle>
<track></track>
<abstract>Neural networks and deep learning are an important part of current artificial intelligence. Therefore, we want people to understand what&#39;s happening under the hood. We&#39;ve used a single ‘layer’ sprite to create a classic Rosenblatt perceptron. Duplicate the layer sprite multiple times and customise the receivers in the transmission blocks to create deep neural networks. Use the setup script to customise the topology and learning rate of the network. Experiment with exciting data sets, investigate the effects of hyperparameters and observe the data stream processing of the deep neural network. Watch how the network learns by processing the error stream during backpropagation.</abstract>
<description>Neural networks and deep learning are an important part of current artificial intelligence. Therefore, we want people to understand what&#39;s happening under the hood. We&#39;ve used a single ‘layer’ sprite to create a classic Rosenblatt perceptron. Duplicate the layer sprite multiple times and customise the receivers in the transmission blocks to create deep neural networks. Use the setup script to customise the topology and learning rate of the network. Experiment with exciting data sets, investigate the effects of hyperparameters and observe the data stream processing of the deep neural network. Watch how the network learns by processing the error stream during backpropagation.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='2159'>Uwe Lorenz</person>
<person id='49'>Jens Mönig</person>
<person id='312'>Jadga Hügle</person>
</persons>
</event>
<event guid='0okK_iWtFdSOREIez72XHw' id='798'>
<date>2025-09-03T18:00:00+02:00</date>
<start>16:00</start>
<duration>00:15</duration>
<room>Plenary Room</room>
<type>Talk</type>
<language></language>
<slug>798-snap-3d-printed-microscope</slug>
<title>Snap! 3D-Printed Microscope</title>
<subtitle></subtitle>
<track></track>
<abstract>The Snap! 3D-Printed Microscope is designed to enable students to explore automated microscopy. It can be fabricated for $50 but uses the same optical lenses as commercial microscopes. Therefore, the quality of images captured using the Snap! Microscope is comparable to that of images acquired with commercial school microscopes.

A school with a maker space can conserve scarce resources by fabricating microscopes for its science classes. The microscope is currently being piloted at sites in Uganda and Kenya where access to commercial microscopes is limited.

Autofocus. An autofocus program developed in Snap! by a high school student working in collaboration with the developers uses edge detection to determine when the image is in focus. This information is communicated to a program on the microcontroller connected to the focus control. The microcontroller program, implemented in MicroBlocks, turns the focus control in the appropriate direction. Based on the results, the Snap! program makes appropriate adjustments until the slide is in focus.

Automated Identification. A team of computer science students at the University of Virginia is developing extensions that employ artificial intelligence to facilitate identification of specimens. This feature will be used to support comparison of specimens collected across different biomes at schools in different geographic regions.

The CAD files for fabrication of the microscope are available in the Educational CAD Model Library, an open-source peer-reviewed repository for schools. The Snap! software application will enable students and teachers to examine and modify the software to create their own extensions and enhancements. 
</abstract>
<description>The Snap! 3D-Printed Microscope is designed to enable students to explore automated microscopy. It can be fabricated for $50 but uses the same optical lenses as commercial microscopes. Therefore, the quality of images captured using the Snap! Microscope is comparable to that of images acquired with commercial school microscopes.

A school with a maker space can conserve scarce resources by fabricating microscopes for its science classes. The microscope is currently being piloted at sites in Uganda and Kenya where access to commercial microscopes is limited.

Autofocus. An autofocus program developed in Snap! by a high school student working in collaboration with the developers uses edge detection to determine when the image is in focus. This information is communicated to a program on the microcontroller connected to the focus control. The microcontroller program, implemented in MicroBlocks, turns the focus control in the appropriate direction. Based on the results, the Snap! program makes appropriate adjustments until the slide is in focus.
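The feedback idea behind such an autofocus can be sketched in plain Python (a hypothetical illustration; the actual Snap!/MicroBlocks implementation is not reproduced here): score each frame by edge strength, since sharp images have stronger gradients, and keep turning the focus control in whichever direction improves the score.

```python
# Hypothetical sketch of edge-detection autofocus; not the actual
# Snap!/MicroBlocks code. `capture(position)` stands in for grabbing a
# frame with the focus control at a given position.
def edge_score(image):
    # Sum of absolute horizontal gradients over a 2-D grayscale image.
    return sum(abs(row[i + 1] - row[i])
               for row in image
               for i in range(len(row) - 1))

def autofocus(capture, max_steps=20):
    """Hill-climb the focus position toward the sharpest image."""
    pos, step = 0, 1
    best = edge_score(capture(pos))
    for _ in range(max_steps):
        trial = edge_score(capture(pos + step))
        if trial > best:               # sharper: keep moving this way
            pos, best = pos + step, trial
        else:                          # blurrier: reverse direction
            step = -step
    return pos
```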

Automated Identification. A team of computer science students at the University of Virginia are developing extensions that employ artificial intelligence to facilitate identification of specimens. This feature will be used to support comparison specimens collected across different biomes at schools in different geographic regions. 

The CAD files for fabrication of the microscope are available in the Educational CAD Model Library, an open-source peer-reviewed repository for schools. The Snap! software application will enable students and teachers to examine and modify the software to create their own extensions and enhancements. 
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='115'>Glen Bull</person>
</persons>
</event>
<event guid='SMh6gJ1ZCSUfW0dz_zIV2A' id='857'>
<date>2025-09-03T18:15:00+02:00</date>
<start>16:15</start>
<duration>00:10</duration>
<room>Plenary Room</room>
<type>Talks Discussions</type>
<language></language>
<slug>857-talks-discussions</slug>
<title>Talks Discussions </title>
<subtitle>session 5</subtitle>
<track></track>
<abstract>Let&#39;s discuss what we just learned.</abstract>
<description>Let&#39;s discuss what we just learned.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='8hDgjieUQZ49Tzjderw__Q' id='797'>
<date>2025-09-03T18:25:00+02:00</date>
<start>16:25</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Birds of a Feather (BoF)</type>
<language></language>
<slug>797-physical-computing</slug>
<title>Physical Computing</title>
<subtitle>Connections to the Physical World using the Snap! MicroBlocks Library</subtitle>
<track></track>
<abstract>There have been notable advances in Snap! projects that interact with the physical world during the past year. The addition of the MicroBlocks library to Snap! has facilitated this work. This library extends the Snap! Broadcast block so that it can communicate with sensors, motors, and actuators connected to microcontrollers. Provision of a standard built-in method in Snap! for communicating with microcontrollers facilitates the process of disseminating microcontroller-based projects.

This capability has enabled students to build custom game controllers by connecting a joystick and buttons to Snap! via a microcontroller. Once a MicroBlocks script linking the microcontroller to Snap! has been created, students can focus on building a game in Snap! with control of every aspect of the system, from the hardware through the game logic. Peter Mathijssen is exploring the use of gestures and voice to control graphics and music. This can facilitate creative endeavors but has also been useful in extending the capabilities of people with disabilities. Special education teachers enrolled in a maker class at the University of Virginia, for example, used this capability to create augmentative communication systems for their students. 

This “Birds of a Feather” session will provide an opportunity to share physical computing projects created during the past year and to discuss potential new directions and possibilities.
</abstract>
<description>There have been notable advances in Snap! projects that interact with the physical world during the past year. The addition of the MicroBlocks library to Snap! has facilitated this work. This library extends the Snap! Broadcast block so that it can communicate with sensors, motors, and actuators connected to microcontrollers. Provision of a standard built-in method in Snap! for communicating with microcontrollers facilitates the process of disseminating microcontroller-based projects.

This capability has enabled students to build custom game controllers by connecting a joystick and buttons to Snap! via a microcontroller. Once a MicroBlocks script linking the microcontroller to Snap! has been created, students can focus on building a game in Snap! with control of every aspect of the system, from the hardware through the game logic. Peter Mathijssen is exploring the use of gestures and voice to control graphics and music. This can facilitate creative endeavors but has also been useful in extending the capabilities of people with disabilities. Special education teachers enrolled in a maker class at the University of Virginia, for example, used this capability to create augmentative communication systems for their students. 

This “Birds of a Feather” session will provide an opportunity to share physical computing projects created during the past year and to discuss potential new directions and possibilities.
</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='115'>Glen Bull</person>
</persons>
</event>
<event guid='JRypwASTpr7TzHxEf2-RbA' id='820'>
<date>2025-09-03T18:55:00+02:00</date>
<start>16:55</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Birds of a Feather (BoF)</type>
<language></language>
<slug>820-teaching-ai-in-middle-school-and-high-school-using-snap</slug>
<title>Teaching AI in Middle School and High School using Snap!</title>
<subtitle></subtitle>
<track></track>
<abstract>The BJC Sparks team has spent the summer of 2025 thinking about how to take all the amazing AI demos we&#39;ve recently seen and incorporate them straightaway, or boil them down into middle- or high-school-appropriate labs and activities. We&#39;d love to hear from others who have been doing this! This will be a chance for all participants to share equally about their activities in the same space, and to hear successes and challenges. 

Here is the slide deck from the presentation with links on the last page: https://docs.google.com/presentation/d/1574NRLUgbN9oRZ0CZCht0OwNZNNXVw8cdC0B_wl0j9g/edit?usp=sharing</abstract>
<description>The BJC Sparks team has spent the summer of 2025 thinking about how to take all the amazing AI demos we&#39;ve recently seen and incorporate them straightaway, or boil them down into middle- or high-school-appropriate labs and activities. We&#39;d love to hear from others who have been doing this! This will be a chance for all participants to share equally about their activities in the same space, and to hear successes and challenges. 

Here is the slide deck from the presentation with links on the last page: https://docs.google.com/presentation/d/1574NRLUgbN9oRZ0CZCht0OwNZNNXVw8cdC0B_wl0j9g/edit?usp=sharing</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='317'>Dan Garcia</person>
<person id='287'>Mary Fries</person>
</persons>
</event>
<event guid='fwQ-8X20dT7BTficAwHd_A' id='818'>
<date>2025-09-03T19:25:00+02:00</date>
<start>17:25</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Birds of a Feather (BoF)</type>
<language></language>
<slug>818-introducing-s4d-snaplets</slug>
<title>Introducing S4D Snaplets</title>
<subtitle>integrating snap microworlds in moodle or other LMS in order to build shareable learning-paths combined with a curating system for teachers </subtitle>
<track></track>
<abstract>The Call for Snap!Con 2025 proposes to &#39;share with us your insights and materials, spread the wisdom, the gimmicks and the awesomeness!&#39;

As an observer of the Snap! community almost from the beginning, I know there are tons of world-class materials, not only for CS and programming but also for various didactical domains including mathematics, music, mechanics, robotics and many others. Yet &#39;sharing&#39; the wisdom differs from &#39;spreading&#39; it into everyday school life.

This BoF proposes some steps for spreading the already existing didactical wisdom rather than generating new material. It is about strategy, not about showcasing.

The microworld approach has opened the option to build self-contained didactical pieces. In the Snap! universe these are meant to be shared. At the same time, there may be demand to bundle a sequence of microworlds into a learning path.
This talk proposes to use Moodle, i.e. Snap!inMoodle (or any other LMS), for this purpose: as an authoring tool for interactive didactic content.
Moodle courses can easily be copied, giving everybody the option to use existing learning paths while being able to modify the path as well as its individual elements.
The elements are proposed to be called S4D Snaplets - Snap! for Didactics - in analogy to H5P, which is widely used to build interactive content, and not only in the Moodle world. The interactive capabilities of H5P are bound to the H5P content type, leaving large room for advanced didactics using the Snap! programming environment.

Furthermore, it is proposed to use the existing Snap!Cloud community software to start a parallel universe in which teachers, acting as curators, give credit to Snaplets and learning paths, leading to a guided system. This solves the problem of choosing quality content from the supposedly enormous number of potential Snaplets and learning paths.</abstract>
<description>The Call for Snap!Con 2025 proposes to &#39;share with us your insights and materials, spread the wisdom, the gimmicks and the awesomeness!&#39;

As an observer of the Snap! community almost from the beginning, I know there are tons of world-class materials, not only for CS and programming but also for various didactical domains including mathematics, music, mechanics, robotics and many others. Yet &#39;sharing&#39; the wisdom differs from &#39;spreading&#39; it into everyday school life.

This BoF proposes some steps for spreading the already existing didactical wisdom rather than generating new material. It is about strategy, not about showcasing.

The microworld approach has opened the option to build self-contained didactical pieces. In the Snap! universe these are meant to be shared. At the same time, there may be demand to bundle a sequence of microworlds into a learning path.
This talk proposes to use Moodle, i.e. Snap!inMoodle (or any other LMS), for this purpose: as an authoring tool for interactive didactic content.
Moodle courses can easily be copied, giving everybody the option to use existing learning paths while being able to modify the path as well as its individual elements.
The elements are proposed to be called S4D Snaplets - Snap! for Didactics - in analogy to H5P, which is widely used to build interactive content, and not only in the Moodle world. The interactive capabilities of H5P are bound to the H5P content type, leaving large room for advanced didactics using the Snap! programming environment.

Furthermore, it is proposed to use the existing Snap!Cloud community software to start a parallel universe in which teachers, acting as curators, give credit to Snaplets and learning paths, leading to a guided system. This solves the problem of choosing quality content from the supposedly enormous number of potential Snaplets and learning paths.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='604'>Uwe Geisler</person>
</persons>
</event>
<event guid='9WbnCxD_nTfvem5aV6CNoA' id='858'>
<date>2025-09-03T19:55:00+02:00</date>
<start>17:55</start>
<duration>00:10</duration>
<room>Plenary Room</room>
<type>Talks Discussions</type>
<language></language>
<slug>858-bof-discussions</slug>
<title>BOF Discussions</title>
<subtitle></subtitle>
<track></track>
<abstract>A 10-minute buffer to finish up your BoF discussions.</abstract>
<description>A 10-minute buffer to finish up your BoF discussions.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
</persons>
</event>
<event guid='mnsFmmS9j3SgKoMI2D8Fuw' id='831'>
<date>2025-09-03T20:05:00+02:00</date>
<start>18:05</start>
<duration>01:00</duration>
<room>Plenary Room</room>
<type>Keynote</type>
<language></language>
<slug>831-what-s-new-in-snap-11</slug>
<title>What&#39;s new in Snap! 11</title>
<subtitle></subtitle>
<track></track>
<abstract>Jens will tell us all about Snap! 11.</abstract>
<description>Jens will tell us all about Snap! 11.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='49'>Jens Mönig</person>
</persons>
</event>
<event guid='DC5A0J4fk4xfzek4kwxCtw' id='851'>
<date>2025-09-03T21:05:00+02:00</date>
<start>19:05</start>
<duration>00:30</duration>
<room>Plenary Room</room>
<type>Plenary</type>
<language></language>
<slug>851-goodbye-snap-con-2025</slug>
<title>Goodbye Snap!Con 2025</title>
<subtitle></subtitle>
<track></track>
<abstract>Like all fun things, Snap!Con 2025 has to come to an end.
Let&#39;s share our highlights and say goodbye to each other.</abstract>
<description>Like all fun things, Snap!Con 2025 has to come to an end.
Let&#39;s share our highlights and say goodbye to each other.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='312'>Jadga Hügle</person>
</persons>
</event>
</room>
<room name='Seminar Room 1'>
<event guid='UJlJbivcaTkTbtrXy0TmIg' id='832'>
<date>2025-09-03T12:30:00+02:00</date>
<start>10:30</start>
<duration>01:30</duration>
<room>Seminar Room 1</room>
<type>Workshop</type>
<language></language>
<slug>832-how-do-we-say-our-code-out-loud</slug>
<title>How do we say our code out loud?</title>
<subtitle></subtitle>
<track></track>
<abstract>Jigsaw programming is very visual, but this doesn&#39;t help us say it out loud - how should we? It is a known problem that people are diverse in the way they read a computer program out loud, and that this may indicate a problem with their understanding of the program. This workshop will demonstrate an early prototype of a Snap! microworld with limited commands that uses turtle graphics to help develop a &#39;speaking&#39; literacy with regard to Snap! programs. Participants who enjoy designing or are interested in pedagogy for early programmers will take the prototype further in practice, in small groups or as individuals, to explore the ideas and improve the practice.</abstract>
<description>Jigsaw programming is very visual, but this doesn&#39;t help us say it out loud - how should we? It is a known problem that people are diverse in the way they read a computer program out loud, and that this may indicate a problem with their understanding of the program. This workshop will demonstrate an early prototype of a Snap! microworld with limited commands that uses turtle graphics to help develop a &#39;speaking&#39; literacy with regard to Snap! programs. Participants who enjoy designing or are interested in pedagogy for early programmers will take the prototype further in practice, in small groups or as individuals, to explore the ideas and improve the practice.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='894'>Richard Millwood</person>
</persons>
</event>
<event guid='oFuwxPP_HMm2Se7oLdadcQ' id='899'>
<date>2025-09-03T15:15:00+02:00</date>
<start>13:15</start>
<duration>01:30</duration>
<room>Seminar Room 1</room>
<type>Workshop</type>
<language></language>
<slug>899-snapping-rags-gen-ai-blocks</slug>
<title>Snapping RAGs - Gen AI + Blocks</title>
<subtitle>Chat with the workshop leads while they set up and demo</subtitle>
<track></track>
<abstract>Since we have an open slot due to a workshop cancellation, you&#39;ll be able to drop by (in the room, in person) while the workshop leads are setting up for their workshop later in the afternoon.

Feel free to exchange with them and ask for demos :)

Online folks feel free to join for the actual workshop later: https://www.snapcon.org/conferences/2025/program/proposals/804 </abstract>
<description>Since we have an open slot due to a workshop cancellation, you&#39;ll be able to drop by (in the room, in person) while the workshop leads are setting up for their workshop later in the afternoon.

Feel free to exchange with them and ask for demos :)

Online folks feel free to join for the actual workshop later: https://www.snapcon.org/conferences/2025/program/proposals/804 </description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9886'>Littlerobot</person>
</persons>
</event>
<event guid='SlHmb-YUsO7D2cHCJ4Dveg' id='804'>
<date>2025-09-03T18:25:00+02:00</date>
<start>16:25</start>
<duration>01:30</duration>
<room>Seminar Room 1</room>
<type>Workshop</type>
<language></language>
<slug>804-snapping-rags-gen-ai-blocks</slug>
<title>Snapping RAGs - Gen AI + Blocks</title>
<subtitle>An exploration of Retrieval-Augmented Generation and Snap! activities</subtitle>
<track></track>
<abstract>Gen AI presents new challenges and opportunities for creative coding and education. While many of us love computer science, loops, and logic, getting started with the beauty of coding can be daunting at times, even with Snap!. Working with local and online LLMs constrained by Retrieval-Augmented Generation, we can empower ourselves and others to explore idea spaces in new and thrilling ways.</abstract>
<description>Gen AI presents new challenges and opportunities for creative coding and education. While many of us love computer science, loops, and logic, getting started with the beauty of coding can be daunting at times, even with Snap!. Working with local and online LLMs constrained by Retrieval-Augmented Generation, we can empower ourselves and others to explore idea spaces in new and thrilling ways.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='9886'>Littlerobot</person>
</persons>
</event>
</room>
<room name='Seminar Room 2'>
<event guid='yNBuFARPYxhP23dme2lYDA' id='753'>
<date>2025-09-03T18:25:00+02:00</date>
<start>16:25</start>
<duration>00:30</duration>
<room>Seminar Room 2</room>
<type>Birds of a Feather (BoF)</type>
<language></language>
<slug>753-turtlestitch-at-ten</slug>
<title>TurtleStitch at Ten</title>
<subtitle>A gathering of digital embroidery folk to celebrate ten years of creativity using the Snap! derivative, TurtleStitch.</subtitle>
<track></track>
<abstract>Happy birthday, TurtleStitch! Ten years after Andrea Mayr first presented TurtleStitch with a masterclass in Amsterdam, there are many reasons to celebrate. Over the last ten years, TurtleStitch has won the hearts of children, artists and educators around the world. In this Birds-of-a-Feather session, TurtleStitch enthusiasts will celebrate and share what they have made and how they have done it.</abstract>
<description>Happy birthday, TurtleStitch! Ten years after Andrea Mayr first presented TurtleStitch with a masterclass in Amsterdam, there are many reasons to celebrate. Over the last ten years, TurtleStitch has won the hearts of children, artists and educators around the world. In this Birds-of-a-Feather session, TurtleStitch enthusiasts will celebrate and share what they have made and how they have done it.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='894'>Richard Millwood</person>
</persons>
</event>
<event guid='Dij_w1AYDEjhQGS3uBxyYA' id='866'>
<date>2025-09-03T18:55:00+02:00</date>
<start>16:55</start>
<duration>00:30</duration>
<room>Seminar Room 2</room>
<type>Birds of a Feather (BoF)</type>
<language></language>
<slug>866-crowdsourcing-the-reference-manual</slug>
<title>Crowdsourcing the Reference Manual</title>
<subtitle></subtitle>
<track></track>
<abstract>The official Snap! Reference Manual hasn&#39;t been updated since version 8. We are converting it to an online format, with a GitHub repository to which anyone can contribute. The conversion process is theoretically automated, but turns out to need a lot of human TLC to make it really look right. Come to this BoF if you&#39;re interested in contributing either to the conversion itself or to the updating, or if you just want to see what we&#39;re doing.</abstract>
<description>The official Snap! Reference Manual hasn&#39;t been updated since version 8. We are converting it to an online format, with a GitHub repository to which anyone can contribute. The conversion process is theoretically automated, but turns out to need a lot of human TLC to make it really look right. Come to this BoF if you&#39;re interested in contributing either to the conversion itself or to the updating, or if you just want to see what we&#39;re doing.</description>
<recording>
<license />
<optout>false</optout>
</recording>
<persons>
<person id='2'>Michael Ball</person>
<person id='5'>Brian Harvey</person>
<person id='1543'>Victoria Phelps</person>
</persons>
</event>
</room>
</day>
</schedule>
