Introduction to functional programming principles, including immutability, higher-order functions, and recursion, using the Clojure programming language. This workshop will cover getting started with the Clojure REPL, building programs through function composition, testing, and web development using ClojureScript.
This workshop will do a deep dive into approaches and recommend best practices for customizing Blacklight applications. We will discuss a range of topics, including styling and theming, customizing discovery experiences, and working with Solr.
We all encounter failure in our professional lives: failed projects, failed systems, failed organizations. We often think of failure as a negative, but it has intrinsic value -- and since it's inevitable that we'll eventually experience failure ourselves, it's important to know how to accept it, how to take lessons from it, and how to grow from it professionally. Fail4Lib, now in its 5th year, is the perennial Code4Lib preconference dedicated to discussing and coming to terms with the failures that we all face in our professional lives. It is a safe space for us to explore failure, to talk about our own experiences with failure, and to encourage enlightened risk taking. The goal of Fail4Lib is for participants to be adept at failing gracefully, so that when we do fail, we do so in a way that moves us forward. This half-day preconference will consist of case studies, round-table discussions, and, for those interested in sharing, lightning talks on failures we've dealt with in our own work.
Intro to programming in Ruby on Rails
Amazon Web Services currently offers 58 services ranging from the familiar compute and storage systems to game development and the internet of things. We will focus on the 20-some services that you should be aware of as you move your applications to their cloud.
The morning session will be mostly overview and the afternoon session will be more practical examples and discussion. This could be broken into two sessions.
FOLIO is a library services platform -- infrastructure that allows cooperating library apps to share data. This workshop is a hands-on introduction to FOLIO for developers of library apps. In this tutorial you will work with your own Vagrant image through a series of exercises designed to demonstrate how to install an app on the platform and use the data sources and design elements the platform provides.
REQUIREMENTS Laptop (4GB) with Vagrant installed.
Have an idea for an app? Want to work with FOLIO developers and others in the community on the FOLIO platform to make it happen? Come to this half-day hack-a-thon! Ideas for new developers will be posted in the project Jira, or bring your own concepts and work with others to make them a reality.
REQUIREMENTS Laptop (4GB) with Vagrant installed. Attending the FOLIO Tutorial is recommended, but not required.
In this workshop, we will step through the various types of applications that can be built with Google Apps Script.
(1) Custom cell formulas
(2) Spreadsheet Add-On Functions (menu items, time-based triggers)
(3) Google Apps Script as a Web Service
(4) Google Apps Script Add-Ons that can be shared globally or by domain
In this workshop, we will build sample instances of each of these types of applications (wifi-permitting) and spend some time brainstorming additional applications that would be useful for the library community.
Sample Applications: http://georgetown-university-libraries.github.io/#google-sheets
Calls to mindfulness and self care can have mixed reception in our field. While some view this important work as navel-gazing or unnecessary, it is integral to being present and avoiding burnout. A skewed attention to output often comes at the expense of our personal lives, our organizations, our health, our relationships, and our mental well-being. Learning to prioritize self-care is an ongoing project among those who perform emotional labor, and mindfulness practice has been shown to protect against burnout.*
The purpose of this preconference is to provide a short introduction to self care and mindfulness, with practical work we can use regardless of setting. We’ll discuss microaggressions and allyship (microaggressions being the brief and commonplace verbal, behavioral, or environmental indignities that marginalized people of various groups experience daily, and allyship referring to the powerful role that individuals from privileged groups can play in supporting marginalized individuals). We will then transition to a modified unconference setting where participants can practice scenarios and learn practical solutions. Each of the presenters has a different set of skills and experiences, allowing many techniques and strategies to be explored. Preconference attendees will participate in sessions like “Mentor Speed Dating,” where they get to talk to and question potential mentors/mentees. They may be coached through a guided meditation or walked through a calming breathing exercise. For those looking for a more physical practice, office yoga and stretching techniques may be shared, depending on unconference interest.
Foundational materials and articles will be shared with registrants prior to the meeting, with the option of further discussion at the workshop. An open access guide to all the resources and readings will be available after the preconference, and people will be encouraged to share their additional tools on a website.
Suggested Hashtag #c4lselfcare
* Abenavoli, R.M., Jennings, P.A., Greenberg, M.T., Harris, A.R., & Katz, D.A. (2013). The protective effects of mindfulness against burnout among educators. Psychology of Education Review, 37(2), 57-69
In this preconference, participants will be introduced to Virtual Reality uses in library settings, notably by way of the VR Reading Room. Within the VR Reading Room prototype, users can collaboratively explore digital collections (e.g. HathiTrust) by way of VR headsets. Participants in this workshop will have the opportunity to experience HTC Vive functionality. The system will be set up with a prototype e-book experiment in order to model several VR affordances. Once attendees have been introduced to the HTC Vive hardware and sample project, groups of participants will have an opportunity to further brainstorm novel use cases.
Python has become one of the dominant languages in scientific computing and is used by researchers around the world. Its popularity is due in large part to a rich set of libraries for data analysis like Pandas and NumPy and tools for exploring scientific code like Jupyter notebooks. Join us for this half-day workshop on the basics of using Pandas within a Jupyter notebook. We will cover importing data, selecting and subsetting data, grouping data, and generating simple visualizations. All are welcome, but some familiarity with Python is recommended, e.g. the concepts covered in the Codecademy or Google Python courses.
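As a small taste of the topics listed above, here is a minimal sketch of subsetting and grouping with Pandas (the dataset and column names are invented for illustration):

```python
import pandas as pd

# A tiny invented dataset standing in for the kind of tabular
# data the workshop will import.
df = pd.DataFrame({
    "branch": ["Main", "Main", "East", "East"],
    "checkouts": [120, 80, 45, 55],
})

# Subsetting: keep only rows where checkouts exceed 50.
busy = df[df["checkouts"] > 50]

# Grouping: total checkouts per branch.
totals = df.groupby("branch")["checkouts"].sum()
print(totals["Main"])  # 200
```

In a Jupyter notebook, each of these steps would sit in its own cell so intermediate results can be inspected interactively.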
Learn about the features and capabilities of Sufia, a Hydra-based repository solution. Attendees will participate in a hands-on demonstration where they deposit content, edit metadata, create collections, and explore access control options. Attendees should bring laptops with Chrome, Firefox, or Safari installed. Please plan on bringing at least one image, document, or other piece of digital content that you're comfortable uploading and using for demo and experimentation purposes :)
The web can be a trove of openly accessible data, but it is not always readily available in a format that allows it to be downloaded for analysis and reuse. This workshop aims to introduce attendees to web scraping, a technique to automate extracting data from websites.
Part one of the workshop will use browser extensions and web tools to get started with web scraping quickly, give examples where this technique can be useful, and introduce how to use XPath queries to select elements on a page.
Part two will introduce how to write a spider in Python to follow hyperlinks and scrape several web pages using the Scrapy framework. We will conclude with an overview of the legal aspects of web scraping and an open discussion.
You don’t need to be a coder to enjoy this workshop! Anyone wishing to learn web scraping is welcome, although some familiarity with HTML will be helpful. Part two will require some experience with Python; attendees unfamiliar with this language are welcome to stay only for part one and still learn useful web scraping skills!
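As a preview of the XPath material in part one, Python's standard library can run limited XPath queries against well-formed markup. Real-world scraping of messy HTML would use the browser tools, lxml, or Scrapy covered in the workshop; the snippet below is only a sketch on an invented page fragment:

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed stand-in for a page listing workshop titles.
page = """
<html>
  <body>
    <ul>
      <li class="title">Intro to Pandas</li>
      <li class="title">Web Scraping 101</li>
    </ul>
  </body>
</html>
"""

root = ET.fromstring(page)
# ElementTree supports a subset of XPath: select every <li> under a <ul>.
titles = [li.text for li in root.findall(".//ul/li")]
print(titles)  # ['Intro to Pandas', 'Web Scraping 101']
```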
Paper prototyping is a low-cost, structured brainstorming technique that uses materials such as paper and pencils to better understand the way users interact with physical, visual, and textual information. It can help us learn how to better think through workflows, space design, and information architecture. Session attendees will learn about the ways low-fidelity prototyping and wireframing can be used to develop ideas, troubleshoot workflows, and improve learning and interaction.
In the first half of the workshop, participants will step through activities in icon design, persona development, and task development. In the second half, they will develop a low-fidelity prototype and step through a guerrilla usability testing process with it.
This half-day workshop is an overview and hands-on introduction to the Open Science Framework and the SHARE data set, two tools that form a powerful combination for supporting scholarship and research locally as well as improving scientific integrity and allowing for new forms of meta-research.
Developed by the Center for Open Science, the Open Science Framework (OSF; http://osf.io) is a free, open source tool that works within the research workflow to allow for better management, curation, streamlining, and sharing of scholarly outputs. SHARE builds its free, open data set (https://share.osf.io/) by gathering, cleaning, linking, and enhancing metadata that describe research activities and outputs—from data management plans and grant proposals to research data and code, to preprints, presentations, and journal articles.
In this workshop, participants will learn to use the OSF to develop embedded data stewardship and research management services for faculty. Attendees will also learn how to leverage and enhance SHARE data to improve their institutions’ understanding of the whole scholarship ecosystem happening on their campuses.
This workshop will be divided into two parts. First, attendees will learn strategies to provide curation and research services to the faculty workflow by operating in the OSF. Practical approaches to faculty collaborations and curation assistance throughout the research life cycle will be discussed. The second part will focus on harnessing the power of the SHARE data set to discover and act upon the research outputs of an institution or organization. This hands-on portion of the workshop will use IPython/Jupyter Notebooks to access the SHARE API and search across 129+ different providers and export and clean the metadata.
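As a taste of the hands-on portion, a search URL for the SHARE API might be assembled as below. The endpoint path and the `q`/`size` parameter names are illustrative assumptions, not confirmed details of the SHARE API; consult the current API documentation for the real values:

```python
from urllib.parse import urlencode, urljoin

BASE = "https://share.osf.io/"

def build_search_url(query, size=10):
    """Build a search URL against the SHARE API.

    The 'api/v2/search/creativeworks/_search' path and the 'q'/'size'
    parameters are hypothetical placeholders for illustration only.
    """
    path = "api/v2/search/creativeworks/_search"
    return urljoin(BASE, path) + "?" + urlencode({"q": query, "size": size})

url = build_search_url("data management plans")
print(url)
```

In the workshop's Jupyter notebooks, a URL like this would be fetched, and the returned metadata exported and cleaned.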
Participants are encouraged to bring laptops in order to follow along. No previous programming experience is necessary.
Understand how the OSF works within the researcher workflow and how it can improve scientific integrity while also fostering collaboration.
Learn to leverage the SHARE API to better understand the intellectual and scientific contributions of a university.
Develop an understanding of the basics of good data-management practices.
In recent years, Code4Lib’s growth as a vital, evolving Open Source community has begun to highlight a common issue in software deployment and service management: what happens when the growth of users outpaces the ability to update or maintain software that serves as an integral tool to the community, like our conference voting service?
We are just one of many communities facing this problem. In this pre-conference, we seek to re-examine the model for developing and maintaining open source tools in a community-driven environment and address the problem of how to continue to provide services in a sustainable way. Current workflows, based on aging, increasingly inflexible software carried on the backs of one or two individuals with no institutional buy-in for continued support, are neither sustainable nor fair.
The first half of this pre-conference will focus on the big picture, detailing the new role that software plays in providing services to an organization or community. We will develop sustainable strategies for implementing software and building a blueprint for using open source tools to meet software-addressable needs, examining and considering the need fulfilled rather than the software used. In the second half of the day, we will apply these strategies to the case study of diebold-o-tron, Code4Lib’s trusty voting software, currently maintained and deployed annually through the generosity of individuals, and develop a pilot strategy for treating this as a service maintained by a community rather than a piece of software maintained by an individual.
Attendees will come away with:
- New perspectives on software’s role in a service-based organizational ecosystem
- Strategies for building team models needed to sustainably support software in a community or organization
- Methods for identifying healthy Open Source projects and programming languages to develop a sustainable infrastructure
Sustainable software is developed at many levels beyond the code itself. For this workshop, we seek the input and collaboration of everyone in the Code4Lib community -- service and project managers, developers, and most of all, users.
The Hydra community has adopted the Portland Common Data Model (PCDM) as a structural data model. By using PCDM as a standard, software can be developed that supports data interchange and reusable components. In this workshop, we'll introduce Hydra-Works, a library that allows the Hydra community to use the Ruby programming language to write data to Fedora using the PCDM standard. Additionally, we'll look at how the Hydra components index metadata into Solr, create user-visible derivatives, and control access to resources. Participants in this workshop should have some coding experience and bring a laptop. If you’ve been wanting to try Hydra but aren’t sure how to get started, come join us for a gentle introduction to building a digital repository with the Hydra technology stack.
This will be a half-day, hands-on workshop covering data modeling, primarily in RDF. We hope to bring a diverse group of Code4Lib community members together to learn, discuss, and understand the basics of data modeling. This modeling work will be taught in the context of interoperability efforts within the Hydra and Fedora communities, particularly in relationship to the development of the Portland Common Data Model (PCDM). We will discuss how data models use a number of standards, discuss how models are used in the context of software design and development, and walk through the different ways to represent models. We will compare and contrast data modeling with metadata standards/profiles. We will walk through modeling efforts around PCDM and its place in our work with digital objects - but this workshop will not focus on PCDM alone or even primarily (this is not a PCDM or RDF workshop). The workshop is intended to serve as the basis for a second workshop (“Data Modeling 201”), proposed separately as a half-day afternoon workshop.
This will be a half-day, hands-on workshop covering data modeling primarily in RDF, building on the separately-proposed “Data Modeling 101” workshop. Participation in Data Modeling 101 is not necessary, but attendees should have a basic familiarity with data modeling and/or RDF. The focus for the afternoon workshop is the hands-on, collaborative creation of examples and models for digital objects using the Portland Common Data Model (PCDM) and other linked data vocabularies. Participants should bring types of objects they want to model, and if possible, provide additional information such as sample records, a METS profile, diagrams, or other documentation. We expect this session will produce PCDM (or other models) examples, documentation, model extensions and work that will be shared back with the broader PCDM community.
Applications are constantly improving and evolving. The applications we use today will be replaced by those of tomorrow. From a continuity and preservation perspective, maintaining our data through time is a critical requirement in this dynamic environment. With that in mind, the Fedora community is focused on ensuring that content can be imported and exported over standard protocols in standard serializations.
Introducing the Fedora Import/Export tool. This tool is under active development, and intends to provide robust and flexible import/export functionality for migrating data, packaging data for preservation, and other uses. This session will provide an overview of the development and requirements gathering that have gone into building the tool, an update on implementation efforts, and guidance on how the tool compares to other import/export/backup/restore tools. Attendees will get hands-on experience using the tool to export data from, and import data into, a Fedora repository running on their laptops. Finally, we'll discuss planned improvements to the tool, and solicit feedback on future development efforts.
One of the driving requirements of a digital repository is that it continue to be performant as it scales. In that theme, significant community effort has gone into the testing of Fedora’s performance characteristics.
This session will offer:
1. A brief, hands-on, survey of interacting with Fedora
2. An update on Fedora’s performance and scale testing status
3. A collaborative activity in defining future-facing testing requirements
4. Some hacking on new, repeatable testing scripts
This workshop will provide hands-on exposure to the InterPlanetary File System (IPFS) (http://ipfs.io) and cover the core technical underpinnings of the distributed web -- particularly Merkle DAGs and Distributed Hash Tables, which are important components of tools like Git, BitTorrent, Dat and IPFS.
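The Merkle DAG idea at the heart of Git, BitTorrent, Dat, and IPFS can be sketched in a few lines: every node is addressed by a hash of its content, and parent nodes link to children by those hashes. This toy uses plain SHA-256 hex digests rather than IPFS's actual multihash/CID format:

```python
import hashlib

def address(data: bytes) -> str:
    # Content addressing: identical content always yields the same ID.
    return hashlib.sha256(data).hexdigest()

# Two leaf blocks of content.
leaf_a = b"hello"
leaf_b = b"distributed web"

# A parent node that refers to its children *by hash*. Changing any
# child changes every ancestor's address -- the Merkle property that
# makes these structures tamper-evident and deduplicable.
parent = (address(leaf_a) + address(leaf_b)).encode()

print(address(leaf_a) == address(b"hello"))  # True: dedup for free
print(address(parent))
```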
Libraries are a target-rich environment for black hat hackers. Learn the tools and techniques they use and build your offense skills. Learn how to find and fix security issues before the bad guys find and exploit them. We’ll also talk about easy strategies to make your library assets defensible. We all know we should use good passwords, keep everything updated, and follow other basic precautions online. Understanding the reasons behind these rules is critical to help us convince ourselves and others that the extra work is indeed worth it. Who are the bad guys? What tools are they using? What are they after? Where are they working? How are they doing it? Why are we all targets? We'll talk about how to stay safe at the library and at home. Many of the most effective strategies for IT security are free and easy to learn. We'll talk about ways to keep your precious data safe inside the library and out -- securing your network, website, and PCs, and tools you can teach to patrons in computer classes. We’ll tackle security myths, passwords, tracking, malware, and more, covering a range of tools and techniques, making this session ideal for any library staff.
In this hands-on workshop, we will analyse and mine texts using a few basic techniques and readily available tools, including:
- extracting (named) entity references from running text
- classifying text types (e.g. newspaper article vs novel vs letter)
- topic modelling
- determining quality of OCR'd text using dictionaries
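The last technique above, estimating OCR quality with a dictionary, reduces to a simple ratio: the fraction of tokens found in a wordlist. A sketch with a toy dictionary (real work would use a full lexicon for the text's language):

```python
def ocr_quality(text, dictionary):
    """Fraction of purely alphabetic tokens found in the dictionary."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in dictionary)
    return hits / len(tokens)

words = {"the", "quick", "brown", "fox", "jumps"}
clean = "the quick brown fox jumps"
noisy = "tne qu1ck hrown fox jumps"  # typical OCR errors

print(ocr_quality(clean, words))  # 1.0
print(ocr_quality(noisy, words))  # 0.5 -- garbled text scores lower
```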
Leiden University Libraries started the Centre for Digital Scholarship in 2016, with support for researchers using text and data mining among its core services. This workshop is based on existing openly licensed materials and on (growing) experience in supporting digital humanities research at Leiden University (in the Netherlands).
This Ally Skills Workshop is based on curriculum developed by the Ada Initiative and Frame Shift Consulting. The workshop teaches simple, everyday ways to be an ally to marginalized people in our workplaces and communities. Participants learn techniques that work at the office, at conferences, and online. The skills taught are relevant everywhere, including those particularly relevant to open technology and culture communities. At the end of the workshop, participants will feel more confident in speaking up to support marginalized people, will be more aware of the challenges facing marginalized groups in their workplaces and communities, and have closer relationships with the other participants. Please note, previous versions of this workshop focused on supporting women in technology; this is an updated curriculum that has been expanded to include questions of racism, sexism, ableism, homophobia, and transphobia.
Impostor syndrome, common among under-represented groups in technology work and academia, is the feeling that you aren't qualified for the work you are doing and will be exposed as a fraud. This workshop will discuss the syndrome and lead participants through writing and discussion exercises designed to combat it. This workshop is based on curriculum developed by the Ada Initiative, and builds on published and replicated research shown to reduce feelings of impostor syndrome.
Building applications and microservices using the power and flexibility of Linked Data through RDF triplestores presents libraries and cultural heritage institutions with an incredible opportunity to grow and manage extensible knowledge graphs for their patrons, institutions, and communities. Participants will be presented with three examples of RDF-based applications and services that use bibliographic and organizational information modeled in BIBFRAME 2.0 and Schema.org RDF triples:
1. Colorado College Senior Thesis Self-Submission Application (https://github.com/Tutt-Library/ccetd.git) - This application allows seniors at Colorado College to self-submit their theses, along with any accompanying datasets, video, or audio, to Colorado College's Fedora-based institutional repository. Modeling the institutional, departmental, and faculty relationships as RDF Schema.org linked data for this application provided an additional benefit: it became the genesis of Colorado College's more general knowledge graph for other uses, instead of being isolated in an application silo.
2. The Colorado Alliance of Research Libraries BIBCAT Pilot (https://github.com/KnowledgeLinks/alliance-bibcat.git) - Using selected MARC 21 records from Colorado College and the University of Colorado Boulder that were generated from the Alliance's Gold Rush comparison service, this project uses BIBCAT (short for bibliographic catalog), an open-source project originally funded by a contract with the Library of Congress, to transform MARC 21 into BIBFRAME, which is then published to the web as Schema.org JSON-LD for indexing by Google, Bing, and other search engines. BIBCAT uses RDF rules that map MARC 21 fields and subfields to BIBFRAME 2.0 entities and properties.
3. DP.LA Service Hub for Colorado and Wyoming (https://github.com/KnowledgeLinks/dpla-service-hub.git) - A State Library of Colorado sponsored effort to aggregate metadata from libraries and museums across Colorado and Wyoming and provide a JSON-LD DPLA MAP v4 feed to DP.LA. This project uses BIBCAT to transform different formats and vocabularies from multiple sources, including Denver Public Library's RDF Dublin Core, Colorado College and University of Wyoming MODS metadata, and metadata provided as a custom CSV file from the History Colorado museum, into BIBFRAME 2.0 entities stored in a triplestore. BIBCAT uses RDF-based rules to ingest these sources while allowing for easy customization and modification through simple editing of RDF Turtle files.
Unlike many library Linked Data events, this preconference's hands-on focus is to help participants start their own development of RDF applications and microservices. While the focus will be on participants writing their own RDF rules in Turtle to manipulate their own metadata, participants will also be introduced to the underlying open-source Python and Haskell code used in the RDF Framework platform.
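The Schema.org JSON-LD publishing step described above can be sketched with Python's standard library. The fields shown are a generic Schema.org book description for illustration, not BIBCAT's actual output:

```python
import json

# A minimal Schema.org description of a book, of the kind published
# as JSON-LD for search-engine indexing (fields are illustrative).
work = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "A. Author"},
}

jsonld = json.dumps(work, indent=2)
doc = json.loads(jsonld)  # round-trip: JSON-LD is just JSON
print(doc["@type"])  # Book
```

Embedded in a page inside a `<script type="application/ld+json">` element, a document like this is what crawlers pick up.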
The rapid advance of augmented and virtual reality (VR) technologies in the last several years, and the accompanying drop in price, creates possibilities for their increasing and ubiquitous application in education. A collaboration between a librarian and a VR specialist led to testing opportunities to apply 360 video in academic library orientation. The team seeks to bank on the inherent interest of Millennials in these technologies, which are an inextricable part of a growing gaming environment in education. A virtual introduction via 360 video aims to familiarize patrons with the library and its services: http://bit.ly/VRlib. A short SurveyMonkey survey following the virtual introduction assesses learning outcomes and allows further instruction when necessary. Patrons can use any electronic device, from desktops to mobile devices of any size. Patrons can also watch in panorama mode, and are provided with goggles if they would like to experience the VR mode.
The next step is an introduction to basic bibliographic instruction, followed by a “scavenger hunt”-style exercise, which aims to gamify students’ practice of basic research skills: http://bit.ly/learnlib. The game is web-based and can be played on any electronic device, from desktops to mobile devices. The game is followed by a short Google Form survey, which assesses learning outcomes and allows further work should any knowledge gaps occur.
The team relies on constructivist theory, assisting patrons in building their knowledge at their own pace and on their own terms, rather than being lectured to and guided by a librarian only.
This proposal envisions a half day of activities for participants to study the opportunities presented by 360 video cameras and to acquire the skills needed to quickly collect useful footage and process it for library needs. The second half of the day is allocated to learning Adobe Dreamweaver in order to manipulate the preexisting “templates” (HTML and jQuery code) for the game and adapt the content and format to the needs of the participants’ libraries.
Provisioning a server by hand is an onerous job, but it's one most library developers have done. And you've certainly heard that there are tools to help you manage this task in a more organized fashion.
This workshop will walk you through how to use Ansible, one such tool, to set up a new service on a new machine. The focus will be on hands-on learning, walking through the common mistakes one can make when using Ansible. You'll gain confidence in the tool, and learn that the error messages Ansible returns are actually useful in finding those mistakes.
You will quickly discover that using Ansible is pretty similar to what you've previously done by hand. You may even have shell scripts written to help you with provisioning; those existing scripts can easily be modified to work with Ansible.
We will also make use of Serverspec, a tool which allows you to characterize the services running on an existing server, and then use this specification to test and verify the results of your efforts with Ansible.
Audience: developers, with some operations experience.
Requirements: participants should bring a notebook computer on which they have admin privileges. Ideally, participants should already have installed VirtualBox, Vagrant, and Git.
Acknowledgements: The curriculum for this workshop was originally developed by Alicia Cozine, of Data Curation Experts.
How does your library exist in the Linked Data world? Is it a foaf:Organization, a schema:Organization, org:Organization, etc? Once you've decided that, what properties does your library have? For example, is it called by an rdfs:label, or skos:prefLabel, or dcterms:title, or something else entirely?
These questions don't have a single correct answer, and therein lies the challenge of modeling data in RDF. Whether it's descriptive metadata, digital object relationships, or places, people, and things, you need to understand the options for mapping data and the pros and cons each mapping brings. This workshop will not be a presentation, but a collaboration: we are relying on attendees to provide concepts they've struggled to map in the Linked Data world. We would then go through the process of discussing as a group how to map those concepts and see if we can come to agreement as to the best approach(es).
Ideally, the end result of this workshop will be more uniform linked data and more standardized practices shared between organizations, because while one can map anything into RDF, the real challenge is doing it in a way that others outside of your institution will be able to make sense of.
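The same resource described with competing predicates makes the mapping problem concrete. The sketch below serializes triples as N-Triples with plain string formatting (a real project would use a library such as rdflib; the IRI is a hypothetical example):

```python
# One resource, two of the label predicates discussed above --
# both are 'correct', which is exactly the modeling problem.
subject = "<https://example.org/library>"  # hypothetical IRI

triples = [
    (subject, "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>",
     "<http://xmlns.com/foaf/0.1/Organization>"),
    (subject, "<http://www.w3.org/2000/01/rdf-schema#label>",
     '"Example Library"'),
    (subject, "<http://purl.org/dc/terms/title>",
     '"Example Library"'),
]

# N-Triples: one "subject predicate object ." statement per line.
ntriples = "\n".join(f"{s} {p} {o} ." for s, p, o in triples)
print(ntriples)
```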
This is a hands-on workshop to explore Drupal 8, the next generation of Drupal that uses RDF as part of its core and is a part of the technology stack for Islandora CLAW. While this version offers native support for schema.org, there are many possibilities that can allow site builders and developers to extend the use of linked data in Drupal 8.
- Working with RDF in Drupal 8 out of the box
- Consuming and displaying RDF data using SPARQL/Linked Open Data endpoints
- Creating content types and resources based on schema.org, testing with RDFa tools
- Exploring the possibilities of semantic data modelling with Drupal 8 with the RDF Mapping API
Are you or someone you know new to digital or software projects in the library or humanities?
Would you like to learn the common skill sets that are in demand for practitioners of digital projects?
Do you want to contribute or try out some of the cool new DL and DH applications out there?
This workshop is for those new to the Digital Library or Humanities and seeking to sift through some of the common tools used by those in the field. The vast number of projects can be intimidating and the technology choices can feel endless.
In this workshop we will go over some of the basic skills useful to participate in a digital project, common technologies, and popular software used.
Participants will come away with a better understanding of some of the foundational skills used for digital projects in the library.
Topics covered include:
Command line basics
Version control, git and github
SSH and security keys
Common data transfer types such as json and xml
Basic data manipulation tools
Skill level: Beginner
No programming knowledge is assumed
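The "common data transfer types" topic above can be previewed with one record expressed both ways and parsed with Python's standard library:

```python
import json
import xml.etree.ElementTree as ET

# The same invented record as JSON and as XML.
as_json = '{"title": "On Libraries", "year": 1931}'
as_xml = '<record><title>On Libraries</title><year>1931</year></record>'

record = json.loads(as_json)
root = ET.fromstring(as_xml)

print(record["title"])          # On Libraries
print(root.find("title").text)  # On Libraries
print(record["year"])           # 1931 -- JSON preserves the number type
print(root.find("year").text)   # '1931' -- XML text is always a string
```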
Wherever there is text data, the need to parse, classify, and extract information may arise. There exist two easy-to-use and battle-tested tools for Pythonistas to tackle text processing: NLTK and spaCy. We’ll lead participants step by step through common NLP tasks using sample text data or their own text data. We argue that classification is at the heart of most useful applications of NLP and that recognizing this key insight is the foundation for pulling meaning out of a sea of characters.
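To make the claim about classification concrete, here is a from-scratch toy: a bag-of-words Naive Bayes classifier with add-one smoothing and uniform priors. The workshop itself uses NLTK and spaCy; this sketch only illustrates the underlying idea on invented sample data:

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the label whose word distribution best fits the text."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)  # add-one smoothing denominator
        score = sum(math.log((c[w] + 1) / total) for w in text.lower().split())
        if score > best_score:
            best, best_score = label, score
    return best

counts = train([
    ("renew my book loan", "circulation"),
    ("book is overdue", "circulation"),
    ("wifi password please", "it-help"),
    ("printer not working", "it-help"),
])
print(classify("my loan is overdue", counts))  # circulation
```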
Islandora is an open source digital repository framework used to preserve and expose special collections, scholarly publications and research data. It combines the Drupal CMS and Fedora Commons repository software, together with additional open source applications. The framework delivers a wide range of functionality out-of-the-box and offers the flexibility of customization to meet emerging functional requirements.
This Islandora workshop will explore multiple uses and implementations of Islandora by community members such as Betsy Coles from the California Institute of Technology (Caltech), Zach Vowel of the California Polytechnic State University (Calpoly), Aaron Krebeck of the Washington Research Library Consortium (WRLC), and others. They will describe their digital projects, how Islandora was utilized, overall experience, and top takeaways.
The workshop will cover topics such as:
- Single site and Consortial Repositories (governance, management, sustainability)
- Metadata Wrangling (pre-processing, batch updating and XML forms)
- Content Modelling Advancements (EADs, TEI, and more)
- Batch Ingesting Content (via Drupal UI and Drush shell scripts)
- Discovery & Harvesting (Solr, Search Engines, DPLA and more)
All examples will be taken from both pilot and production repositories. Users will leave with an understanding of the Islandora software framework and community, the underpinnings of Islandora content modelling and metadata requirements, the ability to batch ingest content, and the knowledge of how to expose repository content to search engines and aggregators for discovery and re-use. We look forward to seeing you there!
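As a taste of the metadata wrangling topic, here is a hedged sketch of one common pre-processing step: turning a spreadsheet export into minimal MODS records ahead of a batch ingest. The CSV columns and object identifiers are hypothetical; only the MODS namespace is real, and a production workflow would use Islandora's own batch tooling rather than this standalone script.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical spreadsheet export: one row per object to ingest.
rows = csv.DictReader(io.StringIO(
    "identifier,title\n"
    "obj-001,Campus photographs\n"
    "obj-002,Oral history transcripts\n"
))

MODS_NS = "http://www.loc.gov/mods/v3"

def make_mods(row):
    """Build a minimal MODS record for one spreadsheet row."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["title"]
    ET.SubElement(mods, f"{{{MODS_NS}}}identifier").text = row["identifier"]
    return mods

# One MODS record per object, keyed by identifier.
records = {row["identifier"]: make_mods(row) for row in rows}
xml_out = ET.tostring(records["obj-001"], encoding="unicode")
print(xml_out)
```

The generated XML files would then be paired with their content files and handed to the batch ingest process.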
Libraries must ensure that users of all abilities can successfully use the technologies we provide. Despite the many ethical and legal motivations, not all of our technologies meet accessibility standards. Ultimately, the responsibility for making technologies accessible falls to the developers and vendors, but it is the responsibility of library staff to facilitate accessibility of information to all patrons, regardless of ability. Advocacy from library staff of all levels and duties is crucial for ensuring that access for persons of differing abilities is a mandatory priority in library technology services.
This workshop provides library workers of all technical backgrounds with a foundation of knowledge and skills to begin actively advocating for conformance to accessibility standards. Attendees will learn about key topics in accessibility: different ability types, types of assistive technologies, legislation and standards, and related issues of usability and respect. Although emphasis will not be placed on the finer details of implementing accessible design, relevant technologies and practices such as ARIA and screen readers will be covered. Heuristics and other simple independent accessibility testing practices will be taught and experienced through interactive exercises. Attendees will also gain practice in how to successfully converse with vendors and developers about accessibility. By the end of the workshop, attendees will have a formidable toolkit of vocabulary, methods, and additional resources to enable them to begin and grow their advocacy for equal access for all persons of differing abilities in their libraries and institutions.
This workshop is targeted at people new to web accessibility, but anyone wanting to learn more is welcome. A survey sent prior to the conference will help determine the topics and depth to be covered. Some experience with HTML is desired but not required.
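To illustrate the kind of simple, independent testing heuristic the workshop covers, the sketch below scans HTML for `<img>` tags missing an `alt` attribute using only the Python standard library. This is one narrow automated check among many; it cannot judge whether existing alt text is actually useful, and the sample HTML is invented for the example.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that are missing an alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Library logo">'
             '<img src="divider.png"></p>')
print(checker.missing_alt)  # ['divider.png']
```

Checks like this are a starting point for conversations with vendors and developers, not a substitute for manual testing with assistive technologies.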
Heard about APIs but don’t know where to start building something? Created an application using an API but don’t feel like it’s ready to share with others? This workshop will provide tips for how to identify and successfully incorporate APIs into a shareable application in a scalable fashion. We'll cover the basic principles, concepts, and tools for interacting with APIs, from authentication with OAuth to REST API patterns. Then we'll look at the coding techniques for making an API-driven application shareable and scalable. We’ll look at strategies for identifying and storing application configuration information, examine how to write tests for application and API interactions so you can be confident of the quality of your code, and discuss techniques for ensuring that your application can be debugged in the event of a failure in the API. Lastly, we’ll talk about how to manage community feedback and contributions to the application.
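One small example of the configuration strategy described above: keep credentials out of source code and attach them to requests at call time. The sketch below builds an authenticated REST request with the Python standard library; the API base URL and token are hypothetical, and a real OAuth flow would obtain the token from an authorization server rather than a literal string.

```python
import urllib.request

API_BASE = "https://api.example.org/v1"  # hypothetical endpoint

def build_request(path, token):
    """Build a GET request with a bearer token supplied by the caller.

    The token should come from configuration (an environment variable or
    a config file), never from a value hard-coded in shared source.
    """
    req = urllib.request.Request(f"{API_BASE}/{path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

# In a real app: token = os.environ["API_TOKEN"]
req = build_request("items/42", token="secret-token")
print(req.full_url)                      # https://api.example.org/v1/items/42
print(req.get_header("Authorization"))   # Bearer secret-token
```

Because the request is built in one place, tests can assert on headers and URLs without hitting the network, and debugging a failing API call starts from a single, inspectable object.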
The International Image Interoperability Framework (IIIF) is a set of technical specifications built around shared challenges in cultural heritage access. Many institutions have scanned large portions of their collections, producing a large body of high-quality images. To provide access to these images and their supporting structure and information, IIIF describes an interoperable method for image delivery and interface description that has been used to address access and reuse of images at many of the world’s largest national and university research libraries, museums, archives, and galleries. A large and growing ecosystem of interoperable software has developed to support each step of image delivery and user experience. This workshop will provide an overview of the IIIF specifications and available software, hands-on training installing IIIF client and server software, and a question and answer session to address your institution’s use cases.
Workshop attendees do not need any prior experience with IIIF and all are welcome.
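For a sense of what the IIIF specifications describe, the sketch below composes an Image API request URL following the published `{region}/{size}/{rotation}/{quality}.{format}` pattern. The server base URL and image identifier are hypothetical; any IIIF-compliant image server would answer URLs of this shape.

```python
def iiif_image_url(server, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Compose a IIIF Image API request:
    {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    (size "max" follows Image API 3.0; 2.x used "full".)
    """
    return f"{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical server and identifier; request the whole image at half scale.
url = iiif_image_url("https://iiif.example.org/image", "page-001",
                     size="pct:50")
print(url)  # https://iiif.example.org/image/page-001/full/pct:50/0/default.jpg
```

Because every parameter lives in the URL, any client that understands the pattern can request crops, scaled derivatives, or rotations from any compliant server, which is the interoperability the framework is named for.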