Who's Chasing Whom?
Below are demonstrations of the various conditions and phenomena discussed in the following presentation:
van Buren, B., and Scholl, B. J. (2017). Who's chasing whom?: Changing background motion reverses impressions of chasing in perceived animacy. Talk given at the annual meeting of the Vision Sciences Society, 5/20/17, St. Pete Beach, FL.
What cues signal animacy? Although researchers have traditionally focused on the motions of objects, what may really matter is how those objects move with respect to the surrounding scene. Here we demonstrate how a movement that signals animacy in one context may be perceived radically differently in the context of another scene. Observers viewed animations containing a stationary central disc and a peripheral disc, which moved around it haphazardly. A background texture moved behind the discs. For half of the observers, the background moved generally along the vector from the peripheral disc to the central disc (as if the discs were moving together over the background, with the central disc always behind the peripheral disc); for the other half, the background moved generally along the vector from the central disc to the peripheral disc. Observers in the first condition overwhelmingly experienced the central disc as chasing the peripheral disc, while observers in the second condition experienced the reverse. In a second experiment measuring objective detection, observers discriminated displays in which a central 'wolf' disc chased a peripheral 'sheep' disc from inanimate control displays in which the wolf instead chased the sheep's mirror image. Although the presence of chasing was always signaled by the wolf and sheep's close proximity, observers performed well when the background moved along the vector from the sheep to the wolf, and poorly when the background moved in an uncorrelated manner, which controlled for low-level motion. These dramatic context effects indicate that spatiotemporal patterns signaling animacy are detected with reference to a scene-centered coordinate system.
Experiment 1 — Who's Chasing Whom?
In our initial experiment, we were curious whether reversing the motion of a background texture would be sufficient to cause two groups of observers to see chasing in completely opposite ways. Observers viewed single animations containing a stationary central disc and a peripheral disc, which moved around it. A background (a map of the city of Tokyo) moved behind the discs. When the background motion was generally along the vector from the peripheral to the central disc, observers described the central disc as chasing the peripheral disc (click/tap the first bar in the graph below to see an example of an animation from this condition). When the background motion was reversed, observers instead described the peripheral disc as chasing the central disc (see the second bar in the graph below). This is an especially dramatic demonstration of how contextual information can modulate perceived animacy. Indeed, simply reversing the motion of a moving background can trigger a complete reversal of perceived agency/patiency in chasing events!
Click/tap the bars to view example displays
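One way to picture how such a display could be driven is sketched below. This is a minimal illustration, not the authors' actual stimulus code: all names and parameters (`speed`, `jitter`, the coordinate convention) are assumptions. The key idea is that the background's velocity points generally along the vector from the peripheral disc to the central disc, and reversing the condition amounts to negating that vector.

```python
import math
import random

def background_velocity(central, peripheral, speed=2.0, jitter=0.5):
    """Velocity (vx, vy) for the background texture, pointing generally
    along the vector from the peripheral disc to the central disc.
    `speed` and `jitter` are illustrative parameters, not from the paper."""
    dx = central[0] - peripheral[0]
    dy = central[1] - peripheral[1]
    angle = math.atan2(dy, dx)
    # Perturb the heading so the motion is only *generally* along the vector.
    angle += random.uniform(-jitter, jitter)
    return (speed * math.cos(angle), speed * math.sin(angle))

def reversed_background_velocity(central, peripheral, **kw):
    """The opposite condition: background motion from central to peripheral."""
    vx, vy = background_velocity(central, peripheral, **kw)
    return (-vx, -vy)
```

Under this sketch, the two between-subjects conditions differ by a single sign flip in the background's motion, while the discs' own screen trajectories are identical.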
Experiment 2 — Chasing Detection (Over Tokyo)
Beyond changing how chasing is seen, can contextual information determine whether chasing is seen in the first place? A new sample of observers performed a difficult chasing detection task in the context of two types of background motion. On trials where chasing was present, a disc in the center of the screen (the 'wolf') chased another, more peripheral disc (the 'sheep') amid distractors. Observers were told that sometimes the background motion would be correlated with the chase, but that other times the background motion would be uncorrelated, and that the following strategy would serve them well regardless of the type of background motion: just look out for a disc in close proximity to the wolf. Since this was a detection task, we also needed displays in which chasing was not visible. We accomplished this by having the wolf chase the invisible mirror image of the sheep, reflected through the vertical midline of the translating background coordinate system. Mirror chasing is a nice control for chasing, because it preserves various low-level motion features while abolishing the chasing percept. Click/tap the first bar of the graph below to see a chasing-present trial in which the background motion is correlated with the chase, as well as a corresponding mirror chasing trial, and a "cheat" video that illustrates how mirror chasing trials were constructed for this condition. Click/tap the second bar of the graph to see a chasing-present trial in which the background motion is uncorrelated with the chase, as well as accompanying mirror chasing and mirror chasing 'cheat' displays. As can be seen from the graph, observers were much better at discriminating chasing from mirror chasing in the Correlated Background Motion condition than in the Uncorrelated Background Motion condition.
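The mirror-chasing construction can be made concrete with a short sketch. This is an illustrative reconstruction under stated assumptions, not the authors' stimulus code: the parameter names (`background_offset_x`, `bg_midline_x`) and the coordinate convention are hypothetical. The essential operation is a reflection of the sheep's position through a vertical axis that translates along with the background.

```python
def mirror_target(sheep_xy, background_offset_x, bg_midline_x=0.0):
    """Position of the sheep's invisible mirror image, reflected through
    the vertical midline of the translating background coordinate system.
    `bg_midline_x` is the midline's x-position in background coordinates;
    adding `background_offset_x` carries that midline along as the texture
    moves. (Names are illustrative; the paper does not specify them.)"""
    midline_screen_x = bg_midline_x + background_offset_x
    mirrored_x = 2.0 * midline_screen_x - sheep_xy[0]
    return (mirrored_x, sheep_xy[1])  # y is unchanged by a vertical-axis flip

# On mirror-chasing trials, the wolf pursues this invisible point rather
# than the sheep itself -- preserving the wolf's low-level motion
# statistics while abolishing the visible chasing percept.
```

Because the reflection axis rides on the background, the mirrored target inherits the background's translation, which is what keeps the control displays matched to the chasing displays in scene-relative terms.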
Although they were encouraged to just use interobject proximity to perform the task, observers' perception of chasing was irresistibly based on how the objects appeared to move with respect to the scene.
Click/tap the bars to view example displays
Experiment 3 — Chasing Detection (Over Grid)
To replicate these results and make sure that they weren't due to the particular map background we had been using, we ran a new version of the experiment using a grid as the background context. Even with this more abstract pattern, we again found that chasing detection was better in the Correlated Background Motion condition than in the Uncorrelated Background Motion condition. (Click/tap the bars below to see examples of the displays that were used in this replication.) Although animacy is a property that we see as belonging to particular objects, perceiving animacy clearly involves the integration of motion signals across entire scenes. Note that the dramatic contextual influences reported here make sense: animate entities inhabit the world, and so it is their motion relative to the world (and not with respect to our retinas) that determines whether we see chasing.
Click/tap the bars to view example displays