WholeSlide

Building the Index

This post will provide a high-level view of the indexing approach used by the DURA remote data source.

The DURA remote data source is simply a standalone Elasticsearch server running on DigitalOcean.

General data model

A DURA object has a few key properties:

  1. title
  2. short description
  3. type -> the corresponding class in the DURA data model
  4. JSON description
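
To make this concrete, here is a minimal sketch of what a single DURA object might look like when pushed into the index. Only the four properties above come from the data model; the index name, endpoint path, field names, and example values are assumptions for illustration (shown here with the Python requests library).

    import json
    import requests

    # One DURA object as an Elasticsearch document. The field names mirror the
    # four properties listed above; all values are placeholders.
    doc = {
        "title": "Example EM volume",
        "description": "A short description shown in search results",
        "type": "EMVolume",  # corresponding class in the DURA data model (assumed name)
        "json": json.dumps({"source": "http://example.org/volume", "levels": 7}),
    }

    # Index the document; the index/type/id path is an assumption and may
    # differ depending on the Elasticsearch version and mapping in use.
    resp = requests.put("http://localhost:9200/dura/object/1", json=doc)
    print(resp.status_code, resp.json())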

To get search to work, I should add additional tags that enable me to slice across different facets.

For example, I would like to predefine queries for:

  • Species

For species, I would return any content, regardless of type, from an organism matching the species string.

  • General content type

For general content type, this would allow me to search for ‘all electron microscopy volumes’, ‘all Zoomify images’, or ‘all course lists’.
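
The two facets above map naturally onto small pre-canned queries. Below is a rough sketch in the Elasticsearch query DSL; the index name and the "species"/"type" field names are assumptions about the mapping, not something the data model guarantees.

    import requests

    def species_query(species):
        # Match any object, regardless of content type, whose organism
        # matches the species string ("species" field assumed).
        return {"query": {"match": {"species": species}}}

    def content_type_query(content_type):
        # Match everything of a general content type, e.g.
        # 'electron microscopy volume' or 'zoomify image'.
        return {"query": {"match": {"type": content_type}}}

    # Run one of the pre-canned queries against the assumed local index.
    resp = requests.post("http://localhost:9200/dura/_search",
                         json=species_query("mouse"))
    for hit in resp.json().get("hits", {}).get("hits", []):
        print(hit["_source"]["title"])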

Thinking aloud

Adding relational search via graph-based mappings, while fun, is beyond the difficulty level of what I’m trying for. That said, it would open up the opportunity to do some great mappings, and possibly give me some overlap between the INCF NI-DM and DURA.

Everything on the collections page will be a search. Every entry in the defaults.json generated by ipy will be a search query, a description, and an icon.
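
As a rough illustration, one entry in that defaults.json could pair a pre-canned query with its display metadata; the key names and icon path below are assumptions, not the actual file format.

    # Hypothetical shape of a single defaults.json entry: a search query plus
    # the description and icon shown on the collections page.
    entry = {
        "description": "All electron microscopy volumes",
        "icon": "icons/em_volume.png",
        "query": {"query": {"match": {"type": "electron microscopy volume"}}},
    }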

Steps to replace the current defaults

  1. Index the defaults in Elasticsearch
  2. Write a pre-canned query
  3. Write a pre-canned query generator (sketched below)
  4. Rebuild defaults.json
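
Steps 2 through 4 might look something like the sketch below: a small generator that turns a list of facets into pre-canned queries and rewrites defaults.json, reusing the entry shape sketched earlier. The facet values, key names, and file layout are illustrative assumptions.

    import json

    # Facets to generate pre-canned queries for: (field, value, description, icon).
    # These are placeholder values, not the real defaults.
    FACETS = [
        ("species", "mouse", "All mouse datasets", "icons/mouse.png"),
        ("species", "human", "All human datasets", "icons/human.png"),
        ("type", "electron microscopy volume", "All electron microscopy volumes", "icons/em.png"),
    ]

    def generate_defaults(facets):
        # Build one defaults.json entry per facet (steps 2 and 3).
        defaults = []
        for field, value, description, icon in facets:
            defaults.append({
                "description": description,
                "icon": icon,
                "query": {"query": {"match": {field: value}}},
            })
        return defaults

    # Step 4: rebuild defaults.json from the generated entries.
    with open("defaults.json", "w") as f:
        json.dump(generate_defaults(FACETS), f, indent=2)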

EM Navigation

A quick demo of Dura, the core application behind WholeSlide Open. We’re quickly working towards a v1.0.

Open Viewer With EM annotation alpha from Rich Stoner on Vimeo.

What can it do now? (build ~#4000)

  • View high resolution image sets from Brainmaps.org (uses boring 2D viewer, 3D in progress)
  • Link with Dropbox (sandboxed currently; needs App Store submission before this will work)
  • Connect to Knossos (EM) image services (experimental, issues with VRAM and OpenGL ES 2)
  • Connect to OpenConnectome (EM) servers (experimental, issues with VRAM and OpenGL ES 2)
  • Open converted EyeWire neuron meshes (experimental, issues with VRAM and OpenGL ES 2)

What will it do in the future?

  • Connect to Aperio image servers
  • Contain most of the image resources available in the original WholeSlide app.
  • Connect to MicroBrightField’s Biolucida Server.
  • Connect to data stored on remote hard drives
  • View and create annotations on high resolution images
  • View stacks of high resolution images in 3D
  • Visualize skeletonized annotations for EM
  • Visualize volume annotations for EM
  • Include data sources beyond neuroscience (dermatology, radiology)
  • Integrate a searchable backend
  • Create courses and link content together

History: Nuance Speech Annotation

A brief demo of the speech recognition annotation feature in the WholeSlide pathology engine. Speech recognition is used to rapidly annotate regions of interest so they can be shared with other clinicians and researchers. Recognition is powered by the Nuance HealthCare SDK.

This submission would eventually place 2nd in Nuance’s Speech SDK Hackathon.

History: Digital Library

When Cambridge released high resolution scans of Newton’s notebooks, I quickly modified the WholeSlide source code to create a demo of how this technology could be used.

Due to the closed nature of the code, however, I was unable to continue work on the Newton project. It’s an ember I hope to rekindle with the release of an open source WholeSlide v2.

History: WholeSlide V1

The original WholeSlide was written as an exercise to see how well the native UIKit libraries had evolved on the iOS platform. Once I was able to get comfortable with the Objective-C syntax and documentation, the rest fell into place. As a codebase, WholeSlide was much more efficient and compact than anything I had written in C++ for iOS previously.

History: WholeBrainCatalog Mobile Engine

Some of the first ideas for WholeSlide came from work I did with the WholeBrainCatalog group at UCSD. Starting from nothing, we were able to piece together an app for viewing high resolution image data in an OpenGL context. The entire application was written in C++ using the openFrameworks toolchain.

At various points it compiled for iPhone, iPad, and OS X 10.5, and could even be controlled with the Kinect.

The source code is still online at http://code.google.com/p/wbcmobileengine/

The Blog Is Alive

This blog will house updates for the open source project, along with any collaborations.