Live update from ISPRS 2012 – II

Hi again folks! I am back – below is a little coverage of what happened in the remaining geoviz sessions, as far as I could capture. Please comment away or drop me a line if you think something needs correcting or adding.

Our two technical sessions on Thursday, August 30th, 2012 started with our working group chair Chris Pettit presenting his paper (co-authored by five others) titled Visualisation Support for Exploring Urban Space and Place, clearly demonstrating the challenge we're facing in terms of heterogeneity of data sources, tasks and audience profiles. He also presented a neat framework and an implementation that may just make these various unruly data sets behave. Check out their abstract.

After Chris, I presented our take on developing a geovisual analytics toolbox on top of open-source QGIS. This is mainly Marco Bernasocchi's work: it can handle multiple variables and plot them in multiple linked views (scatter plot, time-vs-value plot, helix view for cyclic data, and a 3D terrain & globe view, including stereo, for topographic context). Development was user-centric, starting with a focus group and ending with a pilot evaluation. If it sounds like something you'd want to explore, check out the QGIS plugins at Marco's github pages (Multiview) or read the abstract.
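If you haven't met a helix view before, the core idea is simple: map the cyclic component of a timestamp (say, month of year) to an angle and the overall progression of time to height, so that recurring patterns line up vertically. Below is a minimal sketch of that layout in plain Python with matplotlib; it is my own toy illustration with made-up data, not code from Marco's plugin.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly values over five years (e.g., a sensor reading).
months = np.arange(60)                    # 0..59, five years of months
values = 10 + 3 * np.sin(2 * np.pi * months / 12) + np.random.rand(60)

# Helix layout: the cyclic part (month of year) becomes the angle,
# elapsed time becomes the height, and the value drives the radius.
angle = 2 * np.pi * (months % 12) / 12
radius = values
x = radius * np.cos(angle)
y = radius * np.sin(angle)
z = months / 12.0                         # height in years

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, y, z, marker="o", markersize=3)
ax.set_zlabel("years elapsed")
plt.show()
```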

Jean-Philippe Aurambout followed me, presenting their endearing (they study animals) and unusual-for-me (they study what the animals do and when, including their... um... urinating and defecating behaviors!) work on monitoring and understanding farm animals better. The study has industrial, environmental and sustainability implications for future farming. See more here.

Gavin McArdle and his colleagues have analyzed the mouse trajectories users produce while executing spatial tasks, using both visual analytics and clustering techniques to find similarities (spatial, attribute, semantic). It is interesting to watch what's going on on your screen as you click away. The work has implications for testing and validating methods (different kinds of trajectories), as well as, of course, for observing user behavior as people solve spatial tasks. More here.
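To make the trajectory-similarity idea concrete: a common first step is to resample each raw cursor trajectory to a fixed number of points and then cluster on point-wise distances. The snippet below is my own toy sketch of that approach in Python (using SciPy's hierarchical clustering), not the authors' actual method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def resample(traj, n=50):
    """Resample an (m, 2) array of cursor points to n points by arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1] if t[-1] > 0 else 1.0
    u = np.linspace(0, 1, n)
    return np.column_stack([np.interp(u, t, traj[:, 0]),
                            np.interp(u, t, traj[:, 1])])

def traj_distance(a, b):
    """Mean point-wise distance between two resampled trajectories."""
    return np.linalg.norm(resample(a) - resample(b), axis=1).mean()

# Three made-up trajectories: two similar diagonals and one arc.
t = np.linspace(0, 1, 30)
trajs = [np.column_stack([t, t]),
         np.column_stack([t, t + 0.05]),
         np.column_stack([t, np.sin(np.pi * t)])]

# Condensed pairwise distance matrix, then average-linkage clustering.
n = len(trajs)
d = [traj_distance(trajs[i], trajs[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(d, method="average"), t=0.2, criterion="distance")
print(labels)  # the two diagonals should share a cluster label
```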

In the session that followed I was the session chair (so you see why I wasn't blogging yesterday). My three speakers (we had one no-show) each delivered talks that were followed by active discussions. The first speaker was Bo Wu, who talked about his work with H. Wong on light pollution: how we can model it and identify when and where it disturbs people's sleep or astro-photography efforts. He looked for patterns (e.g. rich neighborhoods vs. poor, tall buildings vs. not-so-tall, etc.) and found some. Read more here. I am left with the question: can we introduce some sensible limits on light pollution? And do geovisualizations change policies? Out to change the world!

Aleksandra A. Sima delivered a very nice presentation of their work (two more collaborators on her paper) on how to create the perfect texture maps from terrestrial photos. Her examples were geological, but the methods seemed as if one could snatch them and use them in any scenario where you want to create the best-quality mosaic images from a large bunch of photos. See it for yourself!
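The general flavor of such texture-mapping pipelines, as I understood it, is to score every candidate photo for each surface patch and keep the best-scoring view. The snippet below is a deliberately simplified sketch of that selection step; the geometry and scoring weights are my own assumptions, not taken from the paper.

```python
import numpy as np

def view_score(patch_normal, patch_center, cam_pos, max_dist=50.0):
    """Score one camera for one surface patch: prefer head-on,
    close-by views. Returns 0 for cameras behind the patch."""
    to_cam = cam_pos - patch_center
    dist = np.linalg.norm(to_cam)
    cos_angle = np.dot(patch_normal, to_cam / dist)
    if cos_angle <= 0 or dist > max_dist:
        return 0.0
    return cos_angle * (1.0 - dist / max_dist)   # made-up weighting

# Toy setup: one patch facing +z, three candidate camera positions.
normal = np.array([0.0, 0.0, 1.0])
center = np.array([0.0, 0.0, 0.0])
cameras = [np.array([0.0, 0.0, 10.0]),    # head-on, close
           np.array([30.0, 0.0, 10.0]),   # oblique
           np.array([0.0, 0.0, -5.0])]    # behind the patch

scores = [view_score(normal, center, c) for c in cameras]
best = int(np.argmax(scores))
print(f"best camera: {best}, scores: {scores}")
```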

Bharat Lohani (with S. Ghosh) presented a full user experiment with a set of visualizations (stereo and non-stereo, based on LiDAR data). Twelve alternatives were presented to 60 participants, who were asked six questions geared towards reporting how well they perceived certain spatial features. The authors did an in-depth statistical analysis of the findings; see their abstract here.

Today (Friday), Nick Hedley gave two talks. The first, with C. Lonergan, was on "exploring the abstract while in the real world" (i.e. augmented reality). When I saw the title (Controlling Virtual Clouds and Making it Rain Particle Systems in Real Spaces using Situated Augmented Simulation and Portable Virtual Environments), I thought he was talking about cloud computing, but he wasn't! He was talking about simulating particle movements and overlaying them on real objects (see the little sketch below). Pretty cool, eh?

The next talk (with C. Chan) was on tsunami evacuation maps. They are static, poorly depicted, single-view maps, he said. What if we could offer mobile maps? And what if they were 'personalized' (optimized for the community)? At about 20 seconds per user, they went out there and captured where people would go if the ground were shaking right now. A nice idea to document that, I thought; one can also detect the not-so-smart things people end up doing. So where would you go if the earth started trembling? Your mental map may or may not save your life.
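About those particle systems, for readers who haven't met them: a particle system is just a bag of positions and velocities updated every frame under some force (here, gravity), which is what makes the 'rain' possible. Below is my own generic toy version in Python; it has no connection to Nick's actual AR implementation, and all the numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
GRAVITY = np.array([0.0, 0.0, -9.81])
DT = 1.0 / 30.0                           # one simulation step per frame

# Spawn 1000 "rain" particles above the scene with slight sideways drift.
pos = rng.uniform([-1, -1, 2], [1, 1, 4], size=(1000, 3))
vel = rng.normal(0.0, 0.1, size=(1000, 3))

for frame in range(90):                   # three seconds of simulated rain
    vel += GRAVITY * DT                   # accelerate under gravity
    pos += vel * DT                       # integrate position
    hit = pos[:, 2] <= 0.0                # particles reaching the "ground"
    pos[hit, 2] = rng.uniform(2, 4, hit.sum())    # respawn them above
    vel[hit] = rng.normal(0.0, 0.1, (hit.sum(), 3))

print(pos[:3])  # in an AR app these positions would be drawn over live video
```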

—–
Note to self: edit this post with links to each speaker’s webpage and edit yesterday’s post with links to abstracts from WED session. Also, figure out if the proceedings/full papers are online somewhere already.
