Pixabay, a Creative Commons CC0 site, just launched this intuitive, fun search tool.
In the real world, we “navigate” visually. In a supermarket, we quickly recognize where certain products can be found: we first get an overview, go to the appropriate shelf, and then search for the desired product, usually finding it. We also know this hierarchical search principle from car navigation systems. For searching images or products on the Internet, however, no such approach has existed so far. Picsbuffet is a new exploratory image search system that makes it easy to browse Pixabay’s images.
To make this kind of image exploration possible, all images are visually arranged on an “image map” according to their similarity. The currently displayed section of the map can be changed interactively by dragging and zooming with the mouse: zooming in reveals more similar images, while zooming out provides an overview of thematically related image concepts.
After entering search keywords, a region with appropriate results is displayed. The heat map in the upper left corner shows the regions where matching pictures can be found. Clicking on the heat map, or on one of the five images below it, jumps to the corresponding region. Clicking on an image shows a preview and a link to its Pixabay page. Alternatively, you can start a new search for similar images.
Picsbuffet offers two views: in 2D mode, shown in the following screenshot, all images are displayed in a flat, square layout; the 3D view, which we already know, offers a better overview by displaying the images in perspective.
If you have found a region with images that you like, you can share this view (like these sunsets) by sending the current URL of the page.
The current version of picsbuffet works best with the latest desktop browsers; a version for mobile devices is in development. Soon it will also be possible to search for images similar to an example image that you provide.
Picsbuffet was designed and implemented by the Visual Computing Group at the Berlin University of Applied Sciences (HTW Berlin). A neural network automatically analyzes all images with regard to their content and appearance, describing each image very compactly with only 64 bytes. In a second step, these image descriptors are then used to arrange all images by similarity on a 2D image map. This is done with a hierarchical Self-Organizing Map (SOM). Further information and other demos, e.g. for automatic tagging of images, can be found on the Visual Computing Group website.
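The arrangement step can be sketched in a few lines. The toy Python example below trains a plain (non-hierarchical) Self-Organizing Map on a handful of short stand-in vectors, playing the role of the 64-byte image descriptors, and assigns each vector to a cell on a small 2D grid, so that similar descriptors land in nearby cells. All function names, parameters, and data here are illustrative assumptions, not picsbuffet’s actual code, which uses a hierarchical SOM over millions of images.

```python
import math
import random

def train_som(vectors, grid_w=4, grid_h=4, epochs=40, seed=0):
    """Train a tiny single-level SOM (a simplified sketch of the idea)."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    # One weight vector per grid cell, randomly initialised.
    weights = [[rng.random() for _ in range(dim)]
               for _ in range(grid_w * grid_h)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for epoch in range(epochs):
        # Learning rate and neighbourhood radius shrink over time.
        lr = 0.5 * (1 - epoch / epochs)
        radius = max(grid_w, grid_h) / 2 * (1 - epoch / epochs) + 0.5
        for v in vectors:
            # Best-matching unit: the cell whose weights are closest to v.
            bmu = min(range(len(weights)), key=lambda i: dist2(weights[i], v))
            bx, by = bmu % grid_w, bmu // grid_w
            for i, w in enumerate(weights):
                gx, gy = i % grid_w, i // grid_w
                d = math.hypot(gx - bx, gy - by)
                if d <= radius:
                    # Pull nearby cells toward v, stronger near the BMU.
                    influence = math.exp(-(d * d) / (2 * radius * radius))
                    for k in range(dim):
                        w[k] += lr * influence * (v[k] - w[k])
    return weights

def place(vectors, weights, grid_w=4):
    """Assign each vector to its best-matching grid cell (x, y)."""
    coords = []
    for v in vectors:
        bmu = min(range(len(weights)),
                  key=lambda i: sum((x - y) ** 2
                                    for x, y in zip(weights[i], v)))
        coords.append((bmu % grid_w, bmu // grid_w))
    return coords

# Two tight clusters of toy 3-D "descriptors" standing in for real images.
reds = [[0.9, 0.1, 0.1], [0.85, 0.15, 0.05], [0.95, 0.05, 0.1]]
blues = [[0.1, 0.1, 0.9], [0.05, 0.15, 0.85], [0.1, 0.05, 0.95]]
coords = place(reds + blues, train_som(reds + blues))
```

After training, the three “red” descriptors end up in one region of the grid and the three “blue” ones in another, which is exactly the property that makes zooming and panning over the map feel coherent: neighbouring cells hold visually similar images.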
Want to see more? Here is a step-by-step video showing a search: