Category Archives: Topics in the Field

BIBFRAME: Knocking Down the Machine-Readable Language Barrier

Library catalogs are always evolving to accommodate new materials and new technologies, but we are currently in a period of particularly ambitious change. We have transitioned from indexes to classification systems, from physical catalog cards to databases capable of holding unimaginably huge stores of data—at least more than librarians transcribing information onto cards years ago would have believed possible. Currently, catalogers are moving from AACR2 (Anglo-American Cataloguing Rules, Second Edition) to RDA (Resource Description and Access) and from MARC (Machine Readable Cataloging) to BIBFRAME (the Bibliographic Framework Initiative). That’s a lot of acronyms, I know, but bear with me—it’ll be exciting (for a given and very nerdy definition of exciting) in the long run.


MARC is a language apart from those used by programmers and app creators. BIBFRAME, when it happens (and it’s happening, sooner than you think!), will allow the information stored in a library catalog to interface much more easily with the syntax used by the non-library world. With that framework in place, we can knock down the language barrier between developers and catalogers and work together to create great things.

Our Resource Description Head, Leigh McDonald, and I recently had the opportunity to attend a seminar hosted by the Potomac Technical Processing Librarians (PTPL) organization on the implementation of BIBFRAME. Beacher Wiggins, director of cataloging and acquisitions at the Library of Congress (LC), presented, as did four other librarians hailing from the LC, the National Library of Medicine, and the University of California, Davis. Wiggins provided an overview of the LC’s trial experience, in which a group of catalogers worked on each record twice, once using their normal workflows and once using the BIBFRAME toolkit. At this time, they have completed one six-month trial, with another set to commence at the beginning of 2017. Further details of this pilot program can be found on their Bibliographic Framework Initiative page, which includes comprehensive information on BIBFRAME, how it works, and downloadable file caches of their completed BIBFRAME records for reference. These records are not visible in the actual LC catalog, so this is currently the only way to see them. You can see a great side-by-side comparison of a MARC record and a BIBFRAME record here (from Karen Coyle on the Web).
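To make the MARC-versus-BIBFRAME contrast a little more concrete, here is a toy sketch in Python. The URIs and property names are invented for illustration and are simplified stand-ins for the real BIBFRAME vocabulary, but the core linked-data idea is the same: instead of one flat record, BIBFRAME describes a Work (the abstract creation) and an Instance (a particular publication of it) as separate resources, each with its own URI, connected by explicit relationships that any software can follow.

```python
# Toy illustration of BIBFRAME's linked-data model (simplified, not the
# actual vocabulary): a Work and an Instance, each identified by a
# hypothetical URI, described as subject/predicate/object triples.

work = "http://example.org/works/moby-dick"        # invented URIs
instance = "http://example.org/instances/1851-ed"

triples = [
    (work, "title", "Moby-Dick"),
    (work, "creator", "Melville, Herman"),
    (instance, "instanceOf", work),                # the link between them
    (instance, "publicationDate", "1851"),
    (instance, "format", "print"),
]

def describe(subject, triples):
    """Collect every predicate/object pair for one subject URI."""
    return {p: o for s, p, o in triples if s == subject}

print(describe(work, triples))
print(describe(instance, triples))
```

Because each resource has its own URI, a developer outside the library world can point at the Work or the Instance directly, which is exactly the kind of interoperability a flat MARC record makes difficult.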

It was surprising to learn just how far along LC’s program was, and that we are, in reality, growing steadily closer to the actual implementation of these concepts. I look forward to an update early next year on the success of their second trial.


The library world is full of information: books, electronic and physical, images, study spaces, but also plenty of numbers.

There are ISBNs, ISSNs, bib ID numbers, OCLC numbers, DOIs, PO numbers, item ID numbers, holding ID numbers, patron numbers, copy numbers, call numbers, and plenty of statistics. We count volumes, people, usage, the ins and outs, the uses and the non-uses.

Check out the staff view of a library record and you will see sizes and some of these numbers. And of course don’t forget bar codes as well as budget numbers.

There are always plenty of numbers…

Image from The Statistical Abstract Of The United States, 1934.

A Step Back in Time with an Eye to the Future

Posted by Dywana Saunders, Angie White and Crista LaPrade

In December, we had the opportunity to travel to Colonial Williamsburg for a rare behind-the-scenes tour of the Department of Collections. We were honored to have Ron Hurst, Chief Curator and Vice President for Collections, Conservation and Museums, as our guide through the impressive facility.

Dywana and Ron

The Wallace Collections and Conservation building is 70,000 sq ft of storage, curatorial offices, and conservation labs for Archaeological Materials, Wooden Artifacts (including furniture), Instruments and Mechanical Arts, Objects, Paintings, Paper, Textiles, and Upholstery. We had the opportunity to meet some of the curators and conservators and see the pieces they were currently working on in the labs. Each lab is equipped with state-of-the-art equipment to clean, preserve, and stabilize museum pieces. We were very impressed to see an actual eighteenth-century red coat:



And watch as a needlework rug from one of the exhibition buildings was mended and blocked:


The photography studio was huge. We were impressed to see a catwalk around the studio where particularly large pieces can be photographed. We also saw the special photo setups for furniture and silver:

Photo Studio

We were also guided through the amazing storage areas located in this building. It was interesting to see rows upon rows of compact shelving. Delicate items were held steady on shelves with special weighted pads and shelves were covered with Plexiglas or fabric dust covers. Paintings, silver, and textiles could all be rolled out for curatorial examination and study. You can find out more about CW Collections and Museums online, as well as look through some of the collections.

In the afternoon, we met with the staff of the Digital History Center located in the John D. Rockefeller Library.

Lisa Fischer, Director, Peter Inker, Manager of 3-D Visualization, and Ted Maris-Wolf, Manager of Research and Content Development, graciously spent time sharing some of their current projects with us. They are working on some amazing projects that extend their reach far beyond the summer tourist visiting DOG Street. We were amazed by Virtual Williamsburg, 1776, which required collaboration among many people from many departments within Colonial Williamsburg to ensure the accuracy of the 3-D modeling and to incorporate representative primary sources depicting a pivotal moment in our nation’s history. Virtual Williamsburg is a collaborative project with the Institute for Advanced Technologies in the Humanities that began in 2006. Work on this project is ongoing and truly impressive.

They also showed us RevQuest, an interactive onsite game that allows visitors to Colonial Williamsburg to use their cell phones to find clues to solve a Colonial-era mystery. A new version, RevQuest: Save the Revolution!, is due out this spring.

And finally, they shared with us the recently unveiled site Slavery and Remembrance, a truly unique and internationally collaborative endeavor described as “a collaboration of UNESCO’s Slave Route Project, The Colonial Williamsburg Foundation, and dozens of sites and museums across the globe.”

Seeing the innovative work being done by our colleagues just 45 minutes or so down I-64 was truly inspiring. We hope you all have the opportunity to visit Colonial Williamsburg in person sometime soon; if you can’t make it there, you simply have to get online and experience it virtually!

Crista, Angie, and Dywana


Angie White, Librarian!

Angie White, the current Digital Production Coordinator in the Discovery, Technology, and Publishing Department (DTP) at Boatwright Memorial Library, received her Master of Library & Information Studies from the School of Library & Information Studies at the University of Alabama just this December. Angie has been a full-time staff member at Boatwright since May of 2013.

Angie is a fourth-generation graduate of the University of Mary Washington in Fredericksburg, VA, where she earned her undergraduate degree in history.

After graduating UMW and taking a year off for an amazing bus ride across the country in a converted school bus (you’ll have to ask Angie to tell you this story), she applied to library school.

Her interest in history and working with historical documents from her time at UMW led Angie to the library program at the University of Alabama. She was also inspired by her mom, who worked at Swem Library during her own undergraduate days and loved it very much. Newly enrolled in the online library school in May of 2012, Angie also emailed Chris Kemp, Head of DTP, and asked if there were any internships in his department. She had heard about the Tokyo War Crimes project and thought it would dovetail nicely with her library school work. So, at the same time Angie was starting library school, she also started as an unpaid intern at Boatwright. As an intern, she worked on the Post-Soviet Resettlement project and the Centennial project.

When I asked Angie what she liked about library school (and remember this was all online except for a three-day campus visit the first semester), she said she liked the small class sizes, all of the stimulating discussion, and the group work. Since this was all online, I was surprised, and I asked her for more information. She said all of the classes were live, with students and teachers using Blackboard Collaborate. Her favorite collaboration tool was, and continues to be, Google Docs. Other platforms they used for their collaborations were Google Hangouts, Facebook, and Skype.

Angie also said she really enjoyed her last library class, on cataloging, as a great practical class. She also finds the philosophy of library systems very interesting: a library is made up of many moving parts that come together as a whole, and it can be hard to keep them all moving in one direction. She is also a great proponent of the library’s missions of sharing and open access.

Angie’s work with the digital camera for digital projects has sparked a passion for photography. Here is a post Angie wrote this past August on the DTP blog. My final question to Angie was… “and are you watching the new TV show, The Librarians?” and she answered with a resounding, “Yes!”

Angie White on the left and Tom Campagnoli on the right.


Finally, we all want to congratulate Angie White on her great accomplishment, and we appreciate her enthusiasm and all of the skills she brings to our department and to the library.

Guest Post: Reflections on AMIA 2014

Today we are featuring a guest post written by Dywana Saunders from the Media Resource Center at Boatwright Memorial Library:

I had the pleasure of attending the Association of Moving Image Archivists (AMIA) conference held in Savannah, Georgia, October 8–11. Conference presenters ran the gamut from film archivists and museum professionals to entertainers, students, and filmmakers, some coming from all over the world. Session topics ranged from snippets on the newest advances with digital asset management systems (DAMs), the Public Broadcasting Metadata Dictionary Project (PBCore), and the Federal Agencies Digitization Guidelines Initiative (FADGI), to dealing with ancient and hard-to-repair AV equipment.


How you, dear reader, can help correct bad OCR

We have a problem that only human eyes can solve. Yours can help.

Here’s some background. In Discovery, Technology and Publishing, we use optical character recognition (OCR) software to extract text from document images in order to make them machine-readable and searchable. In simple terms, the OCR process works through a bit of binary “yes/no” logic – either something exists in a given place, or nothing does. No matter what kind of image you put into the software (color, grayscale, whatever), the application creates a temporary black and white version. That is the version to which the “yes/no” operation is applied – the resulting pixel patterns in the image are compared to “known character” patterns. Different software packages use different logic, but in the end all those “known characters” get put together and output to a text file – or something similar.
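As a rough illustration of that binarization step, here is the “yes/no” decision applied to a tiny made-up grayscale image. This is a bare-bones sketch of the idea, not how any particular OCR package actually works:

```python
# Minimal sketch of binarization: each 0-255 grayscale pixel is reduced
# to a yes/no ("ink here or not") decision by comparing it to a threshold.
# Real OCR engines use far more sophisticated (often adaptive) methods.

def binarize(gray, threshold=128):
    """Map each grayscale pixel to 1 (ink) or 0 (background)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# A tiny 3x4 "image": low values are dark pixels and count as ink.
page = [
    [250, 30, 40, 240],
    [245, 25, 35, 250],
    [255, 20, 45, 235],
]

print(binarize(page))
# -> [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
```

A smudge or exposure problem that pushes a background pixel below the threshold flips its bit, which is exactly how a clean capital S can start to look like a 5 to the pattern matcher.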

A black and white rendering of text from a Tokyo War Crimes Trial document. Your eyes can tell what most of these words are, but trust me - a machine is going to have a rough time.


In the past we’ve done a variety of things with these files – from loading the pure text content into searchable database fields (as in a previous implementation of our America at War collection), to embedding the text within image files (the Student Research portions of the UR Scholarship Repository), and applying extensive XML markup to historical documents, enabling customized searching and manipulation of information (see our site focused on the published Proceedings of the Virginia Secession Convention). For folks who are dedicated to going paperless, there are plenty of OCR applications available for mobile devices, too.

OCR is a great tool, but the technology has limitations. Depending on the printing process that created an original document, a capital S might look a bit like the numeral 5 as a result of artifacts on the paper, a smudge of ink, or damaged type. The type of original materials we’re working with makes a difference, too: the high-resolution camera we use to digitize rare materials at Boatwright results in fantastic images, but the best camera on the planet can’t change the fact that microfilm is, well, microfilm. It’s a great format for preserving content, but a lousy medium from which to digitize. Occasionally, microfilm is all we have to work from.

Exposure problems during the microfilming process have a lasting impact on the usability of the images. Much of the text, particularly in the underexposed document to the right, is unreadable to an OCR application.


Take our Collegian collection, for example. As part of UR’s 175th anniversary about 10 years ago, the full run of the student newspaper, the Collegian, was digitized. Most of these issues existed only on seldom-used reels of microfilm rather than paper, and, as a result of the age of the papers when they were initially microfilmed, many of the resulting images were not ideal for OCR purposes. The software knew that there were characters in the images provided, but oftentimes the resulting text was way off base. If you’ve ever tried to identify long-passed family members in old, faded photographs, you have an understanding of what the OCR software is going through: you know that the person you’re looking for is there – recognizing them among the crowd is the issue. Take that one step further by attempting to identify every individual, and you’ll have an idea of the computational difficulty that the OCR process can sometimes face.


The 5th Marine Regiment in front of the US Capitol in 1919: Great-great-grandpa – where are you?

Fast-forward to 2014, and our Collegian collection is still online – in fact, among our digital collections, the Collegian regularly receives the highest volume of traffic. The difficulty with OCR remains, though we’ve recently incorporated a mechanism which allows users to correct the text output of the OCR process. The changes made to the underlying text files are reindexed and searchable immediately upon saving – talk about instant gratification.

So if you’re someone who is interested in the history of the University of Richmond from the students’ perspective, I invite you to contribute a little bit of time to enhance this collection. Simply click the image below, then the “Register” link at the top of the collection home page to get started.


Open Source, Free Like A Puppy…

Scott McNealy, co-founder of Sun Microsystems, is famous for once having said that “Open source is free like a puppy is free” (Donoghue). He is, of course, talking about the expenses necessary for taking care of the free puppy.

Open source is kind of like that. It is free by definition: “open source” denotes software whose source code is available free of charge to the public to use, copy, modify, sublicense, or distribute. That said, open source is actually much more than just free. Open source is, for the most part, community-supported by people who have technology issues a lot like yours. A person may need an application for something, so in some cases they create it, maintain it, add functionality, put it out there for you to freely use, and answer questions to help you bring the application online. Using McNealy’s puppy example, it would be like the puppy buying itself, coming home to your house by itself, housebreaking itself, and learning to fetch your slippers, again… all by itself. It’s really hard for me to see the bad thing in this but, believe it or not, there are some valid concerns.

Open source software development is flourishing and very much in use all over the world. While proprietary software companies complain about open source, Forrester Research reports that 76% of developers have used open source technology at some level (Baldwin). That means even companies that create or purchase ‘off the shelf’ software use free, open source tools to build with – companies like Apple, the first major computer company to make open source development a key part of its ongoing software strategy, and Microsoft, which initially went to war against open source development.

“Open source is an intellectual-property destroyer,” former Windows chief Jim Allchin famously quipped in 2001. “I can’t imagine something that could be worse than this for the software business and the intellectual-property business” (Cooper).

And who can forget that old timeless classic…

“Linux is a cancer that attaches itself in an intellectual property sense to everything it touches,” former Microsoft CEO Steve Ballmer told the Chicago Sun Times a few months later. “That’s the way that the license works” (Cooper).

Now, however, in May 2014, Microsoft finally made official its unofficial decision to incorporate some open-source code into its developer and programming languages. More recently, Microsoft put 22-year company veteran Mark Hill in charge of a global group to cultivate open-source developers to write applications that work with Azure, the Microsoft cloud service that competes against the likes of Rackspace, Google, and Amazon (Cooper).

As Microsoft eventually came to understand, there are a lot of benefits to using open source.  To name just a few:

1. Keeps costs down.
2. Improves quality because code problems are resolved quickly.
3. Delivers agility by speeding up the pace of software development and innovation, which allows businesses to react quickly rather than depend on vendors’ schedules.
4. Mitigates business risk by reducing dependence on any single vendor.

We use a mix of proprietary and open source software in Discovery, Technology, and Publishing to administer the library servers and applications such as the library catalog, digital collections, and various departmental workflows. There are times when we would like to have functionality that we don’t currently have, but that’s been true of the vendor-supplied software as well as the open source software. For that reason, I don’t really distinguish between the two types; I just see each as a toolbox that I need to use to get the job done. Open source plays a huge role in our success as a department.

But let’s not forget that the ‘free puppy’ criticism does have some merit. The first issue is training: people are resistant to change, so they are not likely to explore an open source alternative to Windows or macOS for their desktop, or to MS Office for their productivity needs. Another issue is support. Proprietary software vendors provide support for their products; if you use open source, you may have to provide your own developer to get the functionality you desire. Lastly, some great open source software development simply ceases for whatever reason, and you may be left with no one to provide patches or updates, again possibly requiring the hiring of a developer to maintain your software.

While these are valid concerns, open source application usage is growing quickly all over the world, in all industries.  Technology costs a lot of money and the financial advantage to using open source software must outweigh the ‘free puppy’ concerns or companies would not be moving in that direction.

On a personal note, I use open source software daily, and I will always look for a free open source application before I buy something because I generally just need something for a single use or for a short time. I use applications like Notepad++, which is better than the Notepad built into Windows; 7-Zip, which zips and unzips files better than the tool built into Windows; VLC Media Player, which is much better than Windows Media Player for manipulating various video formats; and WinSCP for transferring files. I also use various open source tools like MultiMon Taskbar, which gives me a taskbar on my second monitor.

If you’ve never installed open source software, here’s some sage advice. Research what you want to install by looking for reviews of the application before you download it. Read the installation instructions and make sure you understand what they want you to do. Try to download it from the site that actually produced it and not a third-party site; this makes certain you are getting a ‘clean’ copy and not a possibly modified copy of the application you want. Finally, there are probably a lot of applications just like the one you’re looking for, so if you install one and don’t like it, don’t give up. Just uninstall it and go find another one.

So… How ’bout that free puppy now?



Donoghue, Andrew. “Open Source ‘is free like a puppy is free’ says Sun boss.” ZDNet. CBS Interactive, June 8, 2005. Web. July 22, 2014.

Cooper, Charles. “Dead and buried: Microsoft’s holy war on open-source software.” CNET. CBS Interactive, June 1, 2014. Web. July 22, 2014.

Baldwin, Howard. “4 reasons companies say yes to open source.” Computerworld. Computerworld, Inc., January 6, 2014. Web. July 22, 2014.

Corgi puppy images from:

Notepad++ :
7-Zip :
VLC Media Player –
WinSCP –
MultiMon –

175 Years of Photography


2014 marks the 175th anniversary of the 1839 public announcement of the daguerreotype, often celebrated as the birth of practical photography. August 19, 2014 is World Photo Day.

I am not an expert or a professional photographer, but I have a great interest in photographs and take a fair amount of them.

Here at Boatwright Library in the department of Discovery, Technology and Publishing we digitize a number of photographs and other images for preservation, access, as well as online exhibits.

Here is a nice table put together by the Library of Congress of the different types of photographs through time, starting in 1839 with daguerreotypes. The introduction of the Kodak Brownie box camera in February 1900 brought photography to the masses and revolutionized it as much as the digital camera has for our generation over the past decade and a half.

Early photographs were on copper, tin, and later glass. There have been positive films as well as negative films. Negative films were made from acetate, nitrate, and later polyester. And don’t forget instant Polaroid cameras. The camera and the photographic process continue to evolve.

Today, of course, there is a camera on almost every phone, and every event in one’s life can be captured and sent to family and friends by pressing a button that sends an image through the air. It’s amazing.

The Brownie started our love affair with photographs and it continues today. And don’t forget to subscribe to our blog!

For more information:

The PBS series “American Photography: A Century of Images” is a good place to start.

The George Eastman House International Museum of Photography and Film, with its over 170 videos uploaded to YouTube, has tons of information on photography as well as on the film restorations it has been involved with.

The photographs of Abraham Lincoln are from the US National Archives flickr site.

3D Printing Primer, part 1

Wondering about 3D printing?  The CTLT has answers for you, and several printers!  You can follow their blog, Thinking in 3D, for insights into 3D printing at UR and for tutorials.

When people think about 3D printing, they tend to think in terms of finished objects but have a hard time fathoming how you get from a digital file to an actual thing.  So, how, exactly, does a 3D printer work?  There are many different ways, but here are three models to consider:

Fused Deposition Modeling (FDM)

FDM is the additive technology many consumer-grade 3D printers are built around.  You can think of it as the “smart hot-glue gun” model of rapid prototyping.  Plastic (or other material) filament is heated, melted, and extruded through a nozzle and laid down in layers to create an object.  Stepper motors drive the nozzle along the horizontal axes, while the build plate is lowered on the Z-axis as the layers of extruded filament are built up.  Objects printed using this method are not solid but, rather, made up of “shells” that define a surface; often a honeycomb structure makes up the interior of FDM objects.  MakerBot and RepRap printers utilize FDM.  There are limitations on the number and size of overhangs this process can accommodate, but it is possible to print models with overhangs by using a support structure akin to scaffolding that is printed on the outside of an object.  There tends to be a fair amount of post-processing with FDM to get satisfying models.

Makerbot Replicator 2


FDM objects printed with supports


Granular Materials Binding

Granular materials binding fuses a powder with dots of glue, also in layers, also moving the build area downwards as the object is built up.  Models made with this method have interiors of solid powder unless you design space into your object, but overhangs are easier to accommodate because the loose powder outside your object always supports it where needed.  Granular materials binding is faster than FDM and allows for a higher resolution.  Color information can be included in models, since color from cartridges can be delivered at the same time as the binder (i.e., glue).  Unused powder is never wasted: because color is carried only in the binder, loose powder can always be reclaimed.  Post-processing of models is less time-consuming, but because the models are solid, they are heavier than models printed in plastic.

Projet 460 professional 3D printer uses granular materials binding


Object printed using granular materials binding


Stereolithography (SLA)

SLA is also an additive process; it uses light (usually a laser) to cure a liquid resin into a model.  There is a $100 3D printer currently being manufactured and beta-tested, the Peachy Printer, that uses this process; it was funded through the crowdfunding platform Kickstarter.

Stereolithography method of 3D printing


These additive processes require models to be manifold, meaning every surface is connected to another surface so the object encloses a volume.  The surfaces are mesh surfaces, in which every plane is approximated with polygons.  3D models must be prepared for printing using software specific to your printer, which translates the digital model into a format your printer understands.  This is referred to as “slicing”: the software slices a model into cross-sections which approximate its curves, and this is how the printer will make shapes.  The instructions sent to the printer are G-code, a numerical control programming language that tells machines how to move and which paths to follow.  A massive amount of code goes into creating a 3D printed object.
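The slicing idea can be sketched in a few lines. The toy function below is an illustration only, not real slicer code: it steps through a sphere at a fixed layer height and reports the radius of the circular cross-section the printer would trace at each layer.

```python
import math

# Toy "slicer": for a sphere of radius r resting on the build plate
# (centered at height r), compute each layer's circular cross-section.
# A real slicer does this for arbitrary polygon meshes and also plans
# the toolpaths, infill, and supports.

def slice_sphere(r, layer_height):
    """Return (z, cross-section radius) for each layer of the sphere."""
    layers = []
    z = layer_height / 2            # sample at each layer's midpoint
    while z < 2 * r:
        # Circle radius at height z: sqrt(r^2 - (z - r)^2)
        layers.append((round(z, 3), round(math.sqrt(r * r - (z - r) ** 2), 3)))
        z += layer_height
    return layers

for z, radius in slice_sphere(r=10.0, layer_height=2.0):
    print(f"layer at z={z}: circle of radius {radius}")
```

Notice how the cross-sections are small near the bottom and top and widest in the middle; a smaller layer height gives more slices and therefore a smoother approximation of the curve, at the cost of print time.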

Stanford bunny manifold, a widely-used test print for 3D printing


Form rendered with a polygon mesh surface


But how do you make a 3D model to print?  Well, that’s another post!  Look for that in 3D Printing Primer, part 2.

Considering open data

Libraries have always been about open data, haven’t they?  Well, yes, in a way.  Aside from the notion of a free lending library, our bibliographic data is freely available and shared, if you know where to find it and how to read it.  We do offer our users a lot of data that may appear to be “open,” i.e., free, but in reality we pay a premium to offer said data.  Publishers snatch up primary source materials and then sell them back to us, as if they’re doing us a favor.  A recent example is the Readex collection of documents related to slavery in the United States, The American Slavery Collection, 1820-1922.  John E. Drabinski has written a thoughtful essay about the inherent dilemma in charging for access to documents that make up our own cultural heritage.  Even our own faculty members are unable to free their research data due to agreements with publishers and the “publish or perish” nature of tenure.

We have been promised a future of linked data, which will make up the semantic web, where point A will lead to point B in a way that is both novel and accurate.  Serendipitous discovery will live alongside the assurance that the John Smith you are interested in is the John Smith you are following through the tangled Web.  Which is great!  I can’t wait!  But we’re not there yet.  There has been encouraging work, most notably, for librarians, with the Virtual International Authority File (VIAF), which integrates a number of national library authority files for names and provides a single identifier for each: a URI, or Uniform Resource Identifier, that can then be used across the web.  You can see it at work in Wikipedia by scrolling down to the bottom of biographical entries and checking out the “Authority Control” box.  Still a ways to go (subjects, anyone?), but it’s a start.

So, want to get started freeing some data?  There are lots of ways you can start small.  As library folks, we are used to thinking about ways to make our data useful and transparent.  The rest of the world is really into this now, too, but we’ve been doing it forever.  So, consider contributing your talents!
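The value of a VIAF-style identifier is easy to demonstrate with a toy sketch. The URIs below are invented for illustration, not real VIAF identifiers, but they show the principle: records that share an identifier can be grouped reliably no matter how the name string happens to be spelled.

```python
# Toy illustration of authority control with URIs: two records spell the
# author's name differently, but because they share the same (invented,
# VIAF-style) identifier we can still tell they describe one person.

records = [
    {"title": "A True Relation of Virginia", "name": "Smith, John",
     "id": "http://viaf.example/12345"},
    {"title": "A Description of New England", "name": "John Smith",
     "id": "http://viaf.example/12345"},
    {"title": "Modern Welding Handbook", "name": "Smith, John",
     "id": "http://viaf.example/67890"},   # a different John Smith
]

def works_by(uri, records):
    """Group works by identifier rather than by name string."""
    return [r["title"] for r in records if r["id"] == uri]

# Matching on the name string would wrongly merge the welder with the
# colonist; matching on the URI keeps them apart.
print(works_by("http://viaf.example/12345", records))
```

This is the disambiguation the “Authority Control” box in Wikipedia is doing: the numeric identifier, not the spelling, is what links a person across systems.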

I’ve often said I wished our library catalogs worked as well as Ravelry, the free database for fiber arts.  It’s interesting that Ravelry views itself as a community rather than a database, making the data it presents personal, and thereby relevant to its users.  Libraries have struggled with how to do this.  It’s something we’d like to do, but we’re, honestly, afraid of what it means for our stated aim of objectivity.  And that’s a serious concern.    Still, Ravelry manages to combine a materials database, a pattern database, and forums along with a personalized user experience.

Are you interested in 3D printing?  There are a lot of amazing repositories for free data files you can print yourself, the highest-profile database of late being that of the Smithsonian’s own 3D modeling project, Smithsonian X 3D, which allows you to download and print models of artifacts from the Smithsonian’s collection.  Thingiverse allows you to browse, organize, and customize models contributed by other users and to contribute your own models.  Other museums have made their 3D scan files available to download, including The Met, which encourages creative use of their files to make new art, or mashups.  Want to get started?  You can find free or open source options for all the software you need to start creating your own 3D models (TinkerCAD, OpenSCAD, SketchUp, Blender).

There are also many citizen science projects you can contribute data to.  Perhaps one of the longest-standing, the annual Great Backyard Bird Count, just happened earlier this month.  Maybe you’d prefer culling through radio signals to help SETI in the search for extraterrestrial life?

On the local front, there is a new group in Richmond, formed as part of Code for America, called Code for RVA, which is a “civic hacking brigade” that works to “improve our city through better technology.”  Their next civic hack night, where they work on civic projects and hack open data, is Tuesday, March 25th at 6pm; they’ll be working on a project using real-time data to build an app that lets parents and students know exactly where their school bus is.

Know of any other open data projects?  Share them in the comments.