Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
Poster session 3
Time:
Tuesday, 27/Aug/2019:
11:00am - 12:30pm

Location: Jubileumzaal
Naamsestraat 22, 3000 Leuven

Presentations

Hypothyroidism can compromise spatial summation and resolution acuity for S-cone selective stimuli

Kalina Ivanova Racheva1, Margarita Boyanova Zlatkova1,2, Tsvetalin Totev Totev1, Emil Slavev Natchev3, Milena Slavcheva Mihaylova1, Ivan Milenov Hristov1, Roger Sproule Anderson2

1Institute of Neurobiology, Bulgarian Academy of Sciences, Bulgaria; 2School of Biomedical Science, University of Ulster, UK; 3Department of Endocrinology, Medical University Sofia, Bulgaria

Hypothyroidism affects visual development in rats, causing thinning of the retinal layers, delayed nerve myelination and reduced opsin production. The visual changes associated with hypothyroidism in humans have received little attention, particularly with respect to colour vision. We measured acuity at the resolution limit and the area of complete spatial summation (Ricco’s area), known to be related to ganglion cell density, for patients with hypothyroidism and age-matched controls. Stimuli were chromatic isoluminant gratings and spots of variable size, presented at 20 deg in the temporal retina. We used silent substitution with modulation from an achromatic background to 90, 270, 0 and 180 deg in DKL space, loosely called blue, yellow, red and green.

Resolution acuity was significantly lower in hypothyroid patients compared to controls only for blue gratings (0.54 c/deg vs 0.77 c/deg, p < 0.05). Similarly, Ricco’s area was significantly enlarged only for blue stimuli (0.25 deg² vs 0.036 deg², p < 0.05) in the hypothyroid group. Similar tendencies were observed for yellow stimuli, but they did not reach statistical significance.

The results suggest that hypothyroidism affects blue-yellow spatial characteristics more than red-green. The observed acuity impairment and Ricco’s area enlargement may be a result of S-cone driven ganglion cell loss or dysfunction in hypothyroidism.



A Multilayer Computational Model of the Parvocellular Pathway in V1

Xim Cerda-Company1, Xavier Otazu1, Olivier Penacchio2

1Computer Vision Center, Universitat Autonoma De Barcelona, Spain; 2School of Psychology and Neuroscience, University of St Andrews, Scotland

We defined a novel firing rate model of color processing in the parvocellular pathway of V1 that includes two different layers of this cortical area: layers 4Cβ and 2/3. Our dynamic model has a recurrent architecture and considers excitatory and inhibitory cells and their lateral connections. To take into account laminar properties and the variety of cells in the modeled area, the model also includes both single- and double-opponent simple cells, and complex cells (a pool of double-opponent simple cells). Moreover, the lateral connections depend on both the type of cells they connect and the layer they are in.

To test the architecture, we used a set of sinusoidal drifting gratings with varying spatio-temporal properties such as spatial and temporal frequency, area of stimulation and orientation. We showed that, to reproduce electrophysiological observations, the architecture has to include non-oriented double-opponent cells in layer 4Cβ, but no lateral connections between single-opponent cells.

We also tested the configuration of lateral connections by studying their effect on center-surround modulation and showed that physiological measurements are reproduced: lateral connections are inhibitory for high contrast and facilitatory for low contrast stimuli. Finally, we mapped the spatio-temporal receptive fields using reverse correlation and showed that the selectivity of cells’ polarity is time-dependent.



Spectral difference between the ambient light flows reaching the extreme peripheral retina through the pupil and through the exposed scleral surface

Alexander Belokopytov1, Galina Rozhkova1, Elena Iomdina2, Olga Selina2

1Institute for Information Transmission Problems (Kharkevich Institute), Russian Academy of Sciences; 2Moscow Helmholtz Research Institute of Eye Diseases

The concept of color constancy implies that perceiving object coloration as invariable requires taking into account the spectral characteristics of the ambient light illuminating the observed scene. There is a hypothesis that the characteristics of ambient illumination are mainly assessed on the basis of photoreceptor responses at the extreme retinal periphery, where the following two light flows can be distinguished: (1) the light that entered the eye through the pupil and was scattered by the eye structures (pupillary flow), and (2) the light that reached the receptors from the illuminated surface through all the eye tunics (diascleral flow). Since it seems problematic to investigate the living human eye in this respect, we performed preliminary experiments on rabbit eyes ex vivo. Using an Eye One spectrophotometer (X-Rite) and a plastic optical fiber, we recorded the input light (Lo), the light leaving the eye through the optic nerve window (L1) and the light crossing all eye tunics (L2). As anticipated, the ratios L1/Lo and L2/L1 revealed a significant difference between the spectra of the pupillary and diascleral flows, showing that the latter was more reddish. This result seems natural in view of the optical parameters of the two light paths. RFBR grant 19-015-00396A (partial support).
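The ratio computation described above can be illustrated with a small sketch. The transmission curves below are invented for illustration (the abstract reports measurements, not these numbers); the point is only how the ratios L1/Lo and L2/L1 act as transmission spectra of the pupillary and diascleral flows:

```python
import numpy as np

wavelengths = np.arange(400, 701, 50)                # nm, coarse grid for illustration
Lo = np.ones(wavelengths.size)                        # flat input spectrum (assumption)
# hypothetical transmissions: both paths attenuate short wavelengths more,
# the path through the eye tunics much more strongly (made-up curves)
L1 = Lo * np.linspace(0.08, 0.12, wavelengths.size)   # light after intraocular scatter
L2 = L1 * np.linspace(0.02, 0.30, wavelengths.size)   # light after crossing the eye tunics

pupillary = L1 / Lo        # spectrum of the pupillary flow
diascleral = L2 / L1       # extra filtering seen by the diascleral flow

def redness(spectrum):
    """Crude long/short-wavelength energy ratio as a 'reddishness' index."""
    return spectrum[wavelengths >= 600].mean() / spectrum[wavelengths <= 500].mean()

print(redness(pupillary), redness(diascleral))   # the diascleral flow comes out redder
```

With any pair of curves in which the tunics transmit long wavelengths preferentially, the second ratio is redder than the first, matching the qualitative result reported above.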



Color constant representations in early visual cortex

Anke Marit Albers, Elisabeth Baumgartner, Hanna Gertz, Karl R. Gegenfurtner

Justus-Liebig-Universität Giessen, Germany

The light entering our eyes is the product of the illumination and the surface reflectance of an object. Although it changes considerably when the illumination changes, we perceive objects as stable in color. To investigate how the brain achieves color constancy, we measured BOLD responses with fMRI while 19 participants either observed colored patches (yellow, blue) under a neutral illuminant, or neutral grey patches under simulated blue and yellow illumination conditions. Under bluish illumination, the grey patches appeared yellow; under yellowish illumination, they appeared blue.

We trained a classifier to discriminate between the blue- and yellow-colored patches based on the activity pattern in V1-V4. Blue and yellow patches could reliably be discriminated (54.76% - 57.73%). The classifier could also discriminate between the apparent yellow and blue (59.14% - 60.63%). Crucially, we then trained the classifier to discriminate between blue and yellow patches, but tested whether it could distinguish between blue and yellow induced by the colored illuminants. Apparent blue and yellow resembled colorimetric blue and yellow in V1 (54.30%), V3 (52.57%) and V4 (52.46%).
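The cross-decoding logic of the final analysis – train on colorimetric blue vs. yellow, test on illumination-induced apparent blue vs. yellow – can be sketched with simulated voxel patterns and a simple nearest-centroid classifier. All data, effect sizes and the classifier choice below are assumptions for illustration; the study's actual classifier and preprocessing are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 50
color_axis = rng.standard_normal(n_vox)   # hypothetical voxel pattern coding blue vs. yellow

def patterns(strength, n_trials=20, noise=1.0):
    # simulated single-trial voxel patterns for one condition (made-up data)
    return strength * color_axis + noise * rng.standard_normal((n_trials, n_vox))

real_blue, real_yellow = patterns(+0.5), patterns(-0.5)
# apparent colors: same pattern direction, weaker signal (an assumption for illustration)
app_blue, app_yellow = patterns(+0.15), patterns(-0.15)

# "train" a nearest-centroid classifier on the colorimetric colors...
c_blue, c_yellow = real_blue.mean(axis=0), real_yellow.mean(axis=0)

def classify(x):
    return "blue" if np.linalg.norm(x - c_blue) < np.linalg.norm(x - c_yellow) else "yellow"

# ...then test it on the apparent colors (cross-decoding)
tests = [(p, "blue") for p in app_blue] + [(p, "yellow") for p in app_yellow]
acc = np.mean([classify(p) == label for p, label in tests])
print(f"cross-decoding accuracy: {acc:.2f}")   # above chance if the representations align
```

Above-chance cross-decoding of this kind is what licenses the conclusion that apparent color resembles colorimetric color in the measured region.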

These findings suggest that not only colorimetric, but also apparent color is represented to some degree in retinotopic visual cortex, as early as in V1.



Evaluation of color-vision deficiency test based on pupil oscillations

Yuta Suzuki1, Kazuya Onodera1, Tetsuto Minami1,2, Shigeki Nakauchi1

1Toyohashi University of Technology, Japan; 2Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, Japan

Pupil frequency tagging (PFT) – entraining pupil oscillations by modulating the luminance of stimulus objects – has been used to track attentional shifts. We adapted PFT to the evaluation of color-vision deficiency by using color changes in equiluminant displays instead of luminance changes. Here, we used flickering stimuli modeled on the Ishihara pseudo-isochromatic plates, each containing three luminance levels of green or red colored dots on the color confusion lines; five types of green/red color contrast were derived from the subject’s subjective color discrimination threshold, plus two fixed contrast levels (i.e. seven different distances on the color confusion lines). The stimulus flickered at 1 Hz from the green to the red pattern and vice versa while the participant's pupil changes were monitored with an eye-tracker. The color-vision deficiency threshold was predicted from the similarity between the pupil oscillations at the fixed contrast levels and at each variable contrast. The predicted threshold was significantly related to subjective color discrimination. This novel classification method, based on photoreceptor-dependent pupillary oscillations, characterized the type of a subject’s color-vision deficiency as well as its extent, without any subjective tests.
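The core PFT measurement – quantifying how strongly the pupil is entrained at the 1 Hz tagging frequency – can be sketched as follows. This is a minimal illustration with simulated pupil traces; the sampling rate, amplitudes and noise levels are assumptions, not values from the study:

```python
import numpy as np

fs = 60.0                      # assumed eye-tracker sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)   # 30 s of recording

def tag_amplitude(pupil, freq=1.0):
    """Amplitude of the pupil oscillation at the tagging frequency (default 1 Hz)."""
    spectrum = np.fft.rfft(pupil - pupil.mean())
    freqs = np.fft.rfftfreq(pupil.size, 1 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - freq))]) / pupil.size

rng = np.random.default_rng(1)
# simulated traces: a discriminable red/green alternation entrains the pupil at 1 Hz,
# an indiscriminable one does not (amplitudes are invented for illustration)
visible = 0.05 * np.sin(2 * np.pi * 1.0 * t) + 0.01 * rng.standard_normal(t.size)
invisible = 0.01 * rng.standard_normal(t.size)

print(tag_amplitude(visible), tag_amplitude(invisible))
```

Comparing such 1 Hz amplitudes across contrast levels is one way the "similarity between pupil oscillations" described above could be operationalized.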



How many component (unique) hues can dichromats see?

Alexander Logvinenko

Glasgow Caledonian University, United Kingdom

According to a recently proposed model of dichromatic colour vision (Logvinenko, 2014), the dichromatic hue palette differs significantly for object and light colours. This may explain why there is no consensus on what colours dichromats see. We explored the object hue palette. A set of Munsell chips was chosen which should be perceived equally by dichromats and trichromats. These chips clearly contain the red, green and blue component hues. As to green, it was tinged with such an amount of white that it was hard to judge its presence even for trichromatic observers. We used the hue scaling method to evaluate the amount of all six component hues for each chip in the sample. Trichromatic observers were asked to evaluate, in percent, how much of each component hue they saw in the chip. We found that although the amount of green was low, its presence was statistically significant for some chips. Thus, all six component hues are present in the hue palette of dichromats. We also confirmed the opponency of black and white, which were never present together in any chip. This is contrary to the generally accepted view that grey is a mixture of black and white.



The screening program for detecting colour vision deficiencies based on a colour blindness simulator: preliminary study

Paul V. Maximov1, Maria A. Gracheva1, Anna A. Kazakova1,2, Alexander S. Kulagin3

1Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russian Federation; 2Pirogov Russian National Research Medical University (RNRMU), Moscow, Russian Federation; 3Moscow State Budgetary Educational Institution “School № 1501”, Moscow, Russian Federation

We have developed a screening program for colour vision deficiencies. The program is based on the colour blindness simulator presented earlier (ECVP 2018). Three images – the full-colour original and simulated “deuteranopic” and “protanopic” images – are displayed simultaneously. The subject's task is to pick the most different image among the three. Normal trichromats select the original picture as the most different one, protanopes select the “deuteranopic” image, and deuteranopes select the “protanopic” image.

81 children (9–17 years old; 26 males, 55 females) and 2 adults (males) were tested. We assessed colour vision with the Rabkin polychromatic test plates and with our program. For the program, we used an ASUS UX305 laptop with an anti-glare IPS screen. In both tests, subjects who made zero mistakes on all images were assessed as “normal”, the others as “abnormal” (anomalous trichromats and dichromats). 7 subjects were identified by the Rabkin test as “abnormal”.

Compared to the Rabkin test, the screening program has a sensitivity of 71% and a specificity of 100%. Increasing the number of test images per subject (we used 11) may increase the sensitivity.
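As a sanity check on the reported figures, here is one hypothetical confusion matrix consistent with them. The raw counts are not given in the abstract; the sketch assumes 5 of the 7 Rabkin-"abnormal" subjects were flagged by the program and none of the "normal" subjects were:

```python
# Hypothetical reconstruction consistent with the reported figures (assumed counts):
# 7 Rabkin-"abnormal" subjects, 76 "normal" subjects
true_pos, false_neg = 5, 2     # program flags 5 of the 7 abnormal subjects
true_neg, false_pos = 76, 0    # program flags none of the normal subjects

sensitivity = true_pos / (true_pos + false_neg)   # 5/7
specificity = true_neg / (true_neg + false_pos)   # 76/76
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

5/7 ≈ 71% matches the reported sensitivity, and zero false positives yields the reported 100% specificity.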

Our screening program seems to be a promising new method for detecting colour deficiencies, though further studies on bigger samples are needed.



Perceptual Accuracy of a Spectrally- and Physically-Based Rendered Cornell Box vs a Real Cornell Box

Gareth V. Walkom, Peter Hanselaer, Kevin A.G. Smet

Light & Lighting Laboratory, KU Leuven, Ghent, Belgium

The Cornell box has been used throughout computer graphics to show the interaction of light in computer renderings. However, it is currently unknown how closely such renderings correspond to a real Cornell box. In this project, test subjects will visually compare a real Cornell box to a simulated box and rate the perceived differences for the different materials in terms of brightness, colorfulness, and hue. The real Cornell box will be built based on characteristics reported in the literature, with walls and objects layered with uniformly colored paper. The real box and its materials will be optically characterized. A colorimetrically accurate simulation of the box will then be rendered in Mitsuba, a state-of-the-art spectrally- and physically-based renderer (SPBR). Colorimetric accuracy will be checked using XYZ tristimulus maps obtained with a TechnoTeam LMK-5 Color Luminance Camera and by measuring the spectral irradiance at several locations using a GigaHertz Optik BTS256E spectral irradiance meter. Given this colorimetric accuracy, the perceptual accuracy determined in the visual experiment will be characterized. Determining the perceptual accuracy of the current state-of-the-art SPBR could greatly advance research in lighting visualization as well as other fields such as computer graphics.



Critical luminance to perceive an object as a light source or a reflective object: can it predict spatial brightness?

Ching-wei Lin, Peter Hanselaer, Kevin Smet

KU Leuven, ESAT/Light&Lighting Laboratory, Ghent, Belgium

Depending on surround conditions, a dim light source can be perceived as a reflective object and, vice versa, a bright reflective object as a light source. The critical luminance at which an object is perceived as self-luminous rather than reflective is defined as GL. However, to our knowledge there are no good models to predict GL. We assume that this critical luminance is affected by spatial brightness and will conduct a series of experiments to test this assumption. A uniform, diffuse, luminance-tuneable sphere and several colourful semi-translucent pictures illuminated from the back by a tuneable light source will be used as probes to test the critical luminance in various types of rooms. Spatial brightness perception as well as the room’s luminance distribution as seen from the observer’s position will be collected at the same time. With these data, we aim to determine and model the relationship between the luminance distribution, GL and the perceived room brightness. Detailed results and conclusions will be reported in the full paper. It is hoped that new insights will be gained into the factors driving perceived room brightness, whether GL can be a good predictor, and how it can best be modelled.



Visual perception in automotive: Testing the glare effects of new car headlamps

Lucie Viktorová1, Ladislav Stanke2

1Department of Psychology, Faculty of Arts, Palacký University Olomouc, Czech Republic; 2Hella Autotechnik Nova, s.r.o., Czech Republic

With the argument of increasing traffic safety through better road illumination, halogen bulbs in modern car headlamps are being replaced by xenon arc lamps and, most recently, by LEDs. Yet at the same time, more drivers seem to complain about glare, which might be a risk factor for safe driving. The studies of glare effects performed so far usually suffer from a low number of participants and/or take only subjective reports of glare into account. The aim of our current research is to study the subjective as well as objective physiological/psychophysical effects of glare from different light sources on the observer in the context of traffic safety. The poster introduces the proposed experimental research design and presents the process of creating a special laboratory – a darkroom simulating two-lane traffic with different car headlamps. Multiple light sources and headlamp designs will be used, including modifications not yet seen on the roads, to assess the current status and test whether changes in headlamp design can lead to an improved user experience. The assessment will be supported by special measuring instruments, both commercially available and of open-source design.



Influence of local chromatic configuration on gloss perception

Tatsuya Yoshizawa1, Haruyuki Kojima2

1Department of Human Sciences, Kanagawa University; 2Department of Psychology, Kanazawa University

Our previous studies showed no difference between yellow and gold in color perception performance, such as color detection and color search, or even in the ERPs for those colors. However, it is generally known that the perception of glossiness is influenced by the statistical features of an image. This implicitly indicates that the color perception of shiny objects like gold is described by such statistics rather than by the local co-occurrence in the spatial configuration of the image, such as the luminance and chromaticity of adjacent areas in a glossy-object image. We therefore psychophysically tested whether the color perception of glossy objects is affected by such local information from adjacent regions of a glossy-object image. Observers with normal color vision judged whether glossiness was perceived in an object image in which 20% to 80% of the pixels had been randomly shuffled. As a control condition, the observers performed the same task with an image of non-glossy objects, as a function of pixel randomization rate and of the spatial resolution of the image. The observers perceived little glossiness in a glossy-object image with 80-percent randomization at low resolution, indicating that the statistical features are relatively robust, although local luminance and chromaticity information also influence glossy color perception.



Comparing scaling methods in lightness perception

Shaohan Li1, Bernhard Lang1, Guillermo Aguilar2, Marianne Maertens2, Felix A. Wichmann1

1Eberhard Karls Universität Tübingen, Germany; 2Technische Universität Berlin, Germany

Psychophysical scaling methods measure the perceptual relations between stimuli that vary along one or more physical dimensions.

Maximum-likelihood difference scaling (MLDS) is a recently developed method to measure perceptual scales which is based on forced-choice comparisons between stimulus intervals. An alternative scaling method that is based on adjusting stimulus intervals is equisection scaling. In MLDS, an observer has to answer which of two shown intervals is greater. In equisection scaling the observer adjusts values between two anchoring points such that the resulting intervals are perceived as equal in magnitude.

We compared MLDS and bisection scaling, a variant of equisection scaling, by replicating a lightness scaling experiment with both methods. Bisection scaling is attractive because it requires less data than MLDS. We found that, qualitatively, the lightness scales recovered by the two methods agreed in shape. However, the bisection measurements were noisier. Worse, scales from the same observers measured in different sessions sometimes differed substantially.

We would therefore not advise using equisection scaling as a method on its own, but suggest that it can be usefully employed to choose more favourable sampling points for a subsequent MLDS experiment.
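The two trial structures compared here can be illustrated with a small simulation, assuming a hypothetical square-root perceptual scale. The true lightness scale is precisely what such experiments estimate, so this is only a sketch of the methods' logic, not of the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def psi(x):
    # hypothetical compressive perceptual scale (assumption for illustration)
    return np.sqrt(x)

def mlds_trial(a, b, c, noise=0.05):
    """One MLDS triad trial: which interval, (a,b) or (b,c), looks larger?"""
    d1 = psi(b) - psi(a)
    d2 = psi(c) - psi(b)
    return 1 if d1 + noise * rng.standard_normal() > d2 else 2

def bisection_trial(a, c, noise=0.05):
    """Bisection: the observer sets b so intervals (a,b) and (b,c) appear equal."""
    target = (psi(a) + psi(c)) / 2 + noise * rng.standard_normal()
    return target ** 2   # invert the (here known) scale

# For physically equal intervals, a compressive scale makes the lower one look larger:
frac1 = np.mean([mlds_trial(0.0, 0.5, 1.0) == 1 for _ in range(100)])

# The bisection point of [0, 1] lands near 0.25, not at the physical midpoint 0.5:
b = np.mean([bisection_trial(0.0, 1.0) for _ in range(200)])
print(frac1, round(b, 2))
```

MLDS fits the scale by maximum likelihood from many such forced-choice responses, whereas bisection reads it off adjusted set points directly, which is why bisection needs less data but inherits the adjustment noise noted above.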



Illusory contrast enhancement by a dark spot in skin-like color gradation

Soyogu Matsushita1, Sakiko Kikunaga2, Junya Aoyama2, Tsuyoshi Nomura2

1Osaka Shoin Women's University, Japan; 2Pias Group Central R&D Laboratory

A dark spot illusorily enhances the perceived contrast of sinusoidal gratings in an adjacent area. While previous studies tested this illusion with grayscale stimuli, this study investigates it with human skin-like colors. The brightest and darkest colors of the sinusoidal gratings were sampled from the skin of a portrait photograph of an actual person. The color of the spot serving as a contrast enhancer was selected from colors plausible for human facial parts or common cosmetics. The results replicated the illusory contrast enhancement when the stimulus consisted of skin-like colors. We speculate that facial parts with darker luminance, such as the eyebrows and lips, could influence the perception of shading on faces, thus affecting perceived masculinity and/or maturity.



When articulation does not enhance lightness contrast

Giuseppe Alessio Platania1, Sabrina Castellano1, Tiziano Agostini2, Giulio Baldassi2, Alessandro Soranzo3

1Università degli Studi di Catania, Italy; 2Università degli Studi di Trieste, Italy; 3Sheffield Hallam University, United Kingdom

Simultaneous lightness contrast (SLC) is the condition whereby two equal greys look different when one is placed against a dark background and the other against a bright background. Adelson (1993) noticed that the SLC magnitude increases when the homogeneous backgrounds are replaced with more articulated ones. In Adelson's display, all darker patches are on one side of the stimuli whilst the brighter ones are on the other. The aim of this research is to test whether this regularity causes the SLC magnitude to increase. In a paper-based experiment, participants were requested to match on a Munsell scale two greys placed against a dark and a white background while the luminance of additional elements was manipulated: dark and bright elements could be added to either side. Results show that when bright elements were added to the darker background and dark elements were added to the brighter background, the SLC magnitude decreased. Vice versa, when bright elements were added to the bright background and dark elements were added to the dark background, the SLC magnitude increased. It is concluded that the photometric relationships in the stimuli determine the SLC magnitude, not the level of articulation per se.



Transparent layer constancy in naturalistic rendered 3D scenes

Charlotte Falkenberg, Franz Faul

Universität Kiel, Germany

In previous work on the perception of thin coloured transparent layers, we observed only relatively small degrees of constancy across illumination changes. This may partly be due to the fact that we used strongly reduced 2D stimuli, as it is known from other domains of perception, e.g. size or object colour perception, that an enriched context often leads to an increase in constancy. To test this hypothesis, we used an asymmetric matching task to measure transparent layer constancy (TLC) in scenes with varying levels of complexity: we presented filters in differently illuminated parts of 'naturalistic' rendered 3D scenes, which contained multiple illumination cues such as scene geometry, surface shading, and cast shadows. To isolate the effects of specific cues on the degree of constancy, we omitted single cues stepwise. In the most reduced condition, a simple 2D colour mosaic remained, which was colourimetrically identical to the corresponding 3D scene. The results suggest that TLC is indeed enhanced in naturalistic scenes, in line with findings of comparable investigations in the domain of colour constancy. An explanation for this increase in TLC might be that the perceptual affiliation of a filter to a particular illumination framework is enhanced in naturalistic scenes.



Measurement of the perceived size of the face by the cheek color

Emi Nakato

Ehime Prefectural University of Health Sciences, Japan

A previous study showed that a face image with a straight-line-shaped cheek blush was perceived as smaller than with other blush shapes (Nakato & Shirai, 2017). Although Kobayashi et al. (2017) revealed that lip color influences perceived facial skin lightness, very few studies have examined how facial coloring by other cosmetics affects the perceived size of face images. This study investigated whether cheek color influences the perceived size of an illustrated face.

Illustrated facial images with four kinds of cheek color (red, pink, purple, and brown) were used as standard stimuli and an illustrated facial image without cheek color as the comparison stimulus.

Using the method of adjustment, participants were instructed to manipulate a computer mouse and to stop when they judged the facial size of the comparison stimulus to be the same as that of the standard stimulus.

The results showed that the face with brown cheek color was perceived to be smaller than the face without cheek color. This finding implies that darker cheek color is a determinant of perceiving a smaller face.



The effect of context in judgements of face gender

Alla Cherniavskaia, Valeria Karpinskaia, Natalia Romanova-Africantova

St. Petersburg State University, Russian Federation

Although the characteristics of individual faces, such as identification of gender, have been studied extensively, questions remain about the effect of context in judgements of face gender. We examined how the perception of an ambiguous, composite face (an image morphed between a male and a female face) is influenced by the context of a surrounding group of faces. 74 naive participants evaluated the gender of each morphed image using a 6-point scale. We calculated a context effect measure by subtracting the gender rating of a particular face when seen in the context of a group of male faces from the gender rating when seen in the context of female faces. Our results showed that there was a context effect in the gender judgements of the composite faces for 35% of participants. Of those participants, 79% showed an assimilation effect, in which the gender rating of the ambiguous face was shifted towards the gender of the surrounding faces and 21% showed a contrast effect, in which the gender rating of the ambiguous face was shifted away from the gender of the surrounding faces. 90% of those shifts involved a perceived gender shift from female to male (or vice versa).



A divided visual field approach to the categorical perception of faces

Ana Chkhaidze, Lars Strother

University of Nevada, Reno, United States of America

The perception of boundaries between stimuli that exist along a graded continuum of physical properties is referred to as categorical perception. Categorical perception is often interpreted as evidence that language influences perception. Consistent with this, divided field studies of color and shape perception showed a relationship between categorical perception and cerebral laterality for language. Unlike color and shape perception, face recognition is associated with right-lateralized circuits in visual cortex and beyond. We hypothesized that the well-known left visual field (LVF) advantage for face recognition would show modulation by categorical versus non-categorical face perception. In three experiments, we used a divided field method in which observers performed a visual search task on arrays of faces split between the LVF and the right visual field (RVF). The search tasks required visual discrimination of faces by virtue of either identity, gender or both. Our results confirmed the existence of categorical face perception in all three types of task. Crucially, however, we found greater categorical perception of identity for LVF faces and the opposite (RVF) for categorical perception of face gender. Our findings show that categorical effects on face recognition depend on opponent cerebral laterality for language and the visual processing of faces.



Face or flower? Hemispheric lateralisation for the perception of illusory faces

Mike Nicholls1, Ashlan McCauley1, Owen Gwinn1, Megan Bartlett1, Simon Cropper2

1Flinders University, Australia; 2Melbourne University, Australia

Pareidolia is the illusory perception of faces in meaningless stimuli. The current study investigates whether the predisposition to see faces where there are none is lateralised to the right cerebral hemisphere. It was predicted that the right hemisphere would be more prone to false positives than the left hemisphere. Normal right-handed undergraduates participated in a forced-choice signal detection task in which they determined whether a face or a flower was present in visual noise. Information was presented to either the left or the right hemisphere using a divided visual field procedure. Experiment 1 involved an equal ratio of signal to noise trials. Experiment 2 provided more opportunity for illusory perception, with 25% signal and 75% noise trials. There was no asymmetry in the ability to discriminate signal from noise trials for either faces or flowers. The response criterion was conservative for both stimuli, and the avoidance of false positives was stronger in the left than in the right visual field. These results were the opposite of what was predicted, and it is suggested that the asymmetry is the result of a left-hemisphere advantage for rapid evidence accumulation.



Face me to remember you! Effect of viewpoint on male and female memory for faces.

Aneta Toteva1, Ivo D. Popivanov1,2

1New Bulgarian University, Bulgaria; 2Medical University of Sofia, University Hospital "Alexandrovska"

Several studies have shown that women outperform men in tasks involving memory for faces. This effect has been demonstrated in children and adults for en-face photographs of faces from the same or different ethnic groups. On the other hand, some studies reported male advantage in spatial tasks, such as mental rotation.

In this study we assessed 27 male and 23 female participants’ memory for 24 frontal, semi-profile and profile views of faces, in an attempt to check whether viewpoint interacts with the gender difference in face memory. Memory for faces was assessed in a recognition task including 24 new faces, 10 min after a learning phase in which the faces were presented twice. Additionally, we estimated the participants’ mental rotation and face discrimination abilities.

The results showed that although non-frontal views were remembered less well in general, women had a marked advantage over men in face recognition. Neither an own-sex nor an other-sex bias was demonstrated.

Interestingly, men and women in our sample did not differ in their mental rotation abilities; however, female participants showed better face discrimination performance, as estimated with the Benton Face Recognition Test. Thus, more optimal face encoding might underlie the female advantage in face memory.



The orientation inversion effect for 3-D concave faces extended to convex faces

Thomas V Papathomas, Steven Silverstein, Attila Farkas, Hristiyan Kourtev, John Papayanopoulos, Brian Monteiro

Rutgers University, United States of America

Introduction: The hollow-face illusion (HFI) refers to the phenomenon of perceiving 3-D hollow faces as normal convex faces. The HFI is much stronger for upright than for upended stimuli (the orientation inversion effect, OIE).

Methods: We displayed stereoscopic pairs of 11 face stimuli: photographs of people, realistically painted masks or unpainted masks; each was shown in 4 combinations (ux, dx, uv, dv): 2 orientations [upright (u)/upended (d)] × 2 geometries [convex (x)/concave (v)]. Participants reported the perceived geometry using 5 choices: concave, somewhat concave, flat, somewhat convex, convex. We processed the data to obtain 6 figures of merit: (a) the strength of the HFI for concave stimuli (uv, dv, v); (b) the ability to correctly perceive convex stimuli (ux, dx, x).

Results: Beyond confirming OIE for concave stimuli, the novel finding is that there is also an effect of orientation for convex stimuli: correct responses are higher for upright faces.

Conclusions: One possible explanation is that the influence of stored knowledge - that faces are convex - accounts for the paradoxical results that humans are better at obtaining the true geometry of hollow masks for upended stimuli, whereas they are worse at obtaining the true geometry of convex masks for upended stimuli.



Face adaptation effects on non-configural information

Ronja Mueller1, Sandra Utz2, Claus-Christian Carbon2, Tilo Strobach1

1Medical School Hamburg, Germany; 2University of Bamberg

Previously inspected faces can affect the perception of faces seen subsequently. The mechanisms underlying these face adaptation effects (FAEs) have been considered to be based on sensory adaptation processes. This sensory-oriented, short-term view of such adaptation effects was challenged by recent studies employing famous faces, which show very reliable and robust adaptation over longer periods of time (hours and days). After 20 years of intense research on FAEs, our knowledge of which qualities of a face can be adapted is still quite limited, as most studies used configurally manipulated stimuli (i.e., mostly addressing 2nd-order relations). Here, we investigated the less well understood adaptation effects on non-configural face information, utilizing alterations that do not change configural aspects of a face: manipulations of color brightness and saturation. Results of our studies provide evidence for non-configural color adaptation effects which seem to be unique within the context of faces. This supports the view that FAEs are not limited to configural qualities of a face.



Face processing in V1: coarse-to-fine?

J.P. Schuurmans1, T. Scholts2, V. Goffaux1,2,3

1Psychological Sciences Research Institute (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; 2Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands; 3Institute of Neuroscience (IONS), UC Louvain, Brussels, Belgium

Coarse-to-fine models propose that primary (V1) and high-level visual regions interact over the course of processing to build up progressively finer representations. We previously observed that a high-level face-preferring region integrates face information in a coarse-to-fine manner. Whether V1 contributes to coarse-to-fine processing remains to be determined. To address this, we re-analysed the data of our past fMRI experiment, in which intact and scrambled faces were presented in three spatial frequency (SF) ranges (low, middle, high) for three durations (75, 150, 300ms). We localized individual V1 based on an anatomical atlas combined with a functional localizer. Next, we conducted a univariate analysis of the average response in this region and submitted the beta values to a repeated-measures ANOVA. Overall, the V1 response decayed as a function of exposure duration. The response to the coarse low-SF input decayed drastically between 75 and 150ms post-stimulus onset and bounced back to its initial level at 300ms of exposure. The decay of the V1 response to middle and high SF was shallower and more linear. The V1 response was comparable between intact and scrambled stimuli. Multivariate pattern analyses are needed for a finer-grained investigation of the spatiotemporal dynamics of SF integration in V1.



Beyond binary face recognition tasks: The effects of familiarity on sensitivity to subtle face changes

Rosyl Selena Somai, Peter Hancock

University of Stirling, United Kingdom

Recently, Abudarham and Yovel [preprint] found that there are no differences in feature hierarchy between familiar and unfamiliar faces in a face identity task. Although the feature hierarchy is not affected by familiarity, the memory accuracy of the features in the hierarchy might be. Theoretically, familiarity could either aid or disrupt face perception. Familiarity could aid the comparison of two faces by using the memory of a familiar face to enrich the image currently in visual working memory (VWM) with more details to compare. Alternatively, this ‘enrichment’ has the potential to disrupt the comparison by overwriting VWM content of the presented image with visual information retrieved from the memorized face. The current study proposes a novel experimental design that allows participants to gradually adjust the appearance of a face to match the original image, allowing us to measure the accuracy of face perception. We studied the effects of familiarity on sensitivity to two types of adjustments: eyebrow and lip thickness (high perceptual sensitivity according to Abudarham & Yovel) and eye distance and mouth width (low perceptual sensitivity). Preliminary results indicate that both perceptual sensitivity and familiarity influence the accuracy of our face perception.



Metacognition of face identification: perspective from multiple face processing

Luyan Ji, William G. Hayward

The University of Hong Kong, Hong Kong S.A.R. (China)

Individuals can extract summary statistics from multiple items. However, the metacognition of ensemble perception is largely unstudied. In this study, we used a member identification task to explore whether observers have insight into implicit average face processing. Participants first saw a group of four faces presented for either 2s or 5s, and were then asked to judge whether the following test face had been present in the set. The test face could be one member of the set, the matching average of the four studied faces, an unstudied face, or the non-matching average of four unstudied faces. After each response, participants rated their confidence. Replicating previous results, there was substantial endorsement of matching average faces, even though they were never present in the set. Metacognition, operationalized as the correlation between accuracy and confidence, improved with increasing duration for identifying unstudied but not studied faces. Importantly, participants were confident when judging the unseen matching average faces to be present, with confidence-accuracy relations at similar levels to those when endorsing matching member faces. The results suggest that average faces might be stored in sensory memory along with individual faces, and that metacognition of face identification differs between target-present and target-absent conditions.



Visual search with deep convolutional neural network

Endel Põder

University of Tartu, Estonia

Visual search experiments with human observers have revealed that simple features (luminance, color, size, orientation) can be detected in parallel across the visual field, independent of the number of objects in a display. Detection of combinations of simple features is more difficult and may need serial processing.

Deep convolutional neural networks roughly follow the architecture of biological visual systems and have shown performance comparable to human observers in object recognition tasks.

In this study, I used the pretrained deep neural network AlexNet as an observer in classic visual search tasks. There were four simple tasks, with targets differing from distractors in luminance, color, length, or orientation, and one complex task (rotated Ts), where the target differs from distractors only in the spatial configuration of its two bars. Set size (number of displayed items) and difficulty level (target-distractor difference, or stimulus size) were varied.

The results were different from usual human performance. It appears that, for the network, there is no difference between searches for simple features, which pop out in experiments with humans, and searches for feature configurations, which exhibit strict capacity limitations in human vision. Both types of stimuli revealed moderate capacity limitations in the neural network tested here.
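As an illustration of how such capacity limitations can be quantified, the sketch below fits a slope of accuracy against log set size by ordinary least squares; the accuracy figures are invented placeholders, not data from this study.

```python
# Hypothetical sketch: quantify a set-size effect by fitting accuracy
# against log2(set size) with ordinary least squares. A near-zero slope
# suggests parallel ("pop-out") search; a strongly negative slope
# suggests capacity limitations. The accuracy values are made up.
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

set_sizes = [2, 4, 8, 16]             # number of displayed items
accuracy = [0.95, 0.88, 0.79, 0.71]   # placeholder detection accuracy
slope = ols_slope([math.log2(s) for s in set_sizes], accuracy)
# slope < 0 indicates accuracy declines as items are added
```

A comparison of such slopes between feature and configuration conditions is one simple way to express the "moderate capacity limitations" reported above.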



Unsupervised learning of viewpoints

Frieder Hartmann, Katherine R. Storrs, Yaniv Morgenstern, Roland W. Fleming

Justus-Liebig-Universität Gießen, Germany

How does the visual system represent relationships between different views of 3D objects when it only has access to 2D projections? We rendered a dataset of 2D silhouettes of 3D shapes from different viewpoints, and evaluated and contrasted different strategies (unsupervised machine learning vs. pixel-based metrics) on how well they capture similarity relationships among the images. We trained a variational autoencoder (VAE) on the dataset and derived a metric of viewpoint difference from the resulting latent representations of pairs of images. We find that this metric meaningfully represents differences in viewpoint, such that different viewpoints of the same 3D shape are organized in a structured way in the VAE’s latent code. We contrast this with a simple pixel-based image similarity metric. Results indicate that the pixel-based metric is prone to artefacts introduced by inconsistent rates of image change between viewpoints. We compare both metrics to human judgments. Using a rank-order task and a multi-arrangement task, we investigate which model best predicts how humans perceive viewpoint differences. We discuss the implications of these results for the human representation of 3D shape.
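A latent-space viewpoint metric of this kind can be sketched as a plain distance between encoded vectors. This is a hypothetical toy version: the stand-in latent codes below are invented, where the real metric would use vectors produced by the trained VAE encoder.

```python
# Hypothetical sketch of a viewpoint-difference metric: Euclidean
# distance between the latent codes an encoder assigns to two images.
def euclidean(z1, z2):
    """Distance between two latent vectors of equal length."""
    return sum((a - b) ** 2 for a, b in zip(z1, z2)) ** 0.5

# Stand-in latent codes for three rendered views of one 3D shape;
# a trained VAE would produce these from the silhouette images.
z_view_0 = [0.0, 1.0, 0.2]
z_view_30 = [0.3, 0.9, 0.1]
z_view_150 = [1.4, -0.2, 0.8]

# If the latent code is organized by viewpoint, nearby viewpoints
# should yield smaller latent distances than distant ones.
d_near = euclidean(z_view_0, z_view_30)
d_far = euclidean(z_view_0, z_view_150)
```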



Modelling Human Recognition of Glossy Highlights with Neural Networks

Konrad E. Prokott, Roland W. Fleming

Justus-Liebig-Universität Giessen, Germany

With recent advances in machine learning, there have been many claims about the similarities between human perception and the computations and representations within neural networks. At the same time, there have been many observations of striking differences. We aim to use machine learning to imitate human perception in the context of highlight recognition, i.e. determining whether a bright point on a surface is a specular highlight as opposed to a surface texture marking. We created a dataset of 165,000 computer-rendered greyscale images of perturbed surfaces that are textured but also display glossy highlights. We generated predictions based on the ground truth of the specular component of these images and on a simple model that used only an intensity threshold. We identified individual locations in the images that distinguish between the models, and asked human observers to judge whether these points depicted texture or a highlight. We compared the responses of human observers to the two predictors across different spatial frequencies of surface geometries as well as different texture patterns. We then searched for neural networks that show a similar pattern of responses. Over a range of conditions, the networks predict human judgments better than the threshold model does.



Classification of spatially modulated textures by convolutional neural network

Denis Yavna, Vitaly Babenko, Kristina Ikonopistseva

Southern Federal University, Russian Federation

Our work is devoted to the modeling of second-order visual mechanisms that detect spatial modulations of brightness gradients. We investigated the ability of neural networks with different convolutional parts to distinguish spatial modulations in textures.

Networks were trained on images commonly used in psychophysical studies: textures synthesized from Gabor micropatterns, modulated in contrast, orientation, and spatial frequency with randomly varied parameters. 15,000 images belonging to three classes were produced: 70% were used for training, 15% for validation, and 15% for testing.

Networks were implemented using the Keras library. The fully connected part always included a hidden layer of 32 units and an output layer of three neurons. The learning capabilities of networks with 3-5 convolutional layers were tested. At the moment, only the network with a five-layer convolutional part has demonstrated learnability (testing accuracy: 98.37%). There are 64 filters in each layer; filter sizes are 3x3 pixels in layers 1-2, 5x5 in layers 3-4, and 7x7 in layer 5. Each convolutional layer is followed by a 2x2 max-pooling layer.

There is a similarity between the heatmaps of gaze-shift data obtained previously in texture identification tasks and the class activation maps visualized using the Grad-CAM procedure.

Supported by RFBR, project No. 18-29-22001.
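For orientation, the spatial dimensions flowing through the five-layer convolutional stack described above can be traced with simple arithmetic. This sketch assumes 128x128 input textures and 'same' convolution padding, neither of which is stated in the abstract.

```python
# Hypothetical size trace for the five-layer convolutional part.
# Assumptions (not stated in the abstract): 128x128 input and 'same'
# padding, so each convolution preserves the spatial size and each
# 2x2 max-pool halves it; 64 filters per layer as described.
def feature_map_sizes(input_size=128, n_conv_layers=5):
    sizes = [input_size]
    for _ in range(n_conv_layers):
        sizes.append(sizes[-1] // 2)  # conv ('same') then 2x2 max-pool
    return sizes

sizes = feature_map_sizes()
# Features entering the 32-unit hidden layer: 4 * 4 * 64 filters.
flattened = sizes[-1] * sizes[-1] * 64
```

Under these assumptions the final 4x4x64 feature maps flatten to 1024 values feeding the fully connected part.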



Modelling human time perception based on activity in perceptual classification networks

Warrick Roseboom

University of Sussex, United Kingdom

Knowledge and experience of time are core parts of conscious, complex behaviour. Popular approaches to understanding human time perception focus on describing putative neural mechanisms that track objective time. In contrast, experience is characterised by deviations from veridicality: ‘time flies when you’re having fun’. We recently proposed a model of time perception built on tracking salient changes in perceptual classification. Saliency was defined as relatively large changes in network activation across layers of a deep convolutional image classification network. Similar to human vision, lower network layers are selectively responsive to less complex features, such as edges, while higher layers are selective for more object-like patterns. Compared against human reports regarding dynamic videos (1-64 seconds), model time estimates reproduce several qualities of human estimation, including regression to the mean, variance proportional to magnitude, and dependency on scene content. Ongoing work further validates model performance using fMRI to track changes in BOLD activity in visual processing areas (V1->IT) while participants view dynamic videos. Preliminary analyses support our primary presupposition that more activity across these perceptual processing areas is related to longer duration estimates, with further, specific model-based hypotheses currently under evaluation. These convergent lines of evidence support this new approach to understanding time perception.
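The core of the model, accumulating salient changes in activation into a duration estimate, can be caricatured in a few lines. The threshold, scaling factor, and activation traces below are invented for illustration and are not the model's actual parameters.

```python
# Hypothetical caricature of change-based time estimation: count
# frames where the change in a (stand-in) network activation exceeds
# a threshold, then scale the count into seconds. All values invented.
def estimate_duration(activations, threshold=0.5, seconds_per_event=0.8):
    events = 0
    for prev, cur in zip(activations, activations[1:]):
        if abs(cur - prev) > threshold:  # a "salient" change
            events += 1
    return events * seconds_per_event

# Stand-in activation traces: a busy scene changes often, so it
# accumulates more salient events and should feel longer.
busy = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]
quiet = [0.1, 0.2, 0.1, 0.2, 0.1, 0.2]
```

This caricature already yields the qualitative dependency on scene content reported above: the busier trace produces a longer estimate.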



Frequency-based object identification. Exploring spectral analysis as a means of simulating human perception in visual systems

Jonas Martin Witt1, Claus-Christian Carbon1,2,3

1University of Bamberg, Germany; 2Research Group EPÆG (Ergonomics, Psychological Æsthetics, Gestalt), Germany; 3Bamberg Graduate School of Affective and Cognitive Sciences (BaGrACS), Germany

In spite of the striking successes of applied artificial intelligence, learning algorithms themselves remain wide of the mark with respect to human perceptual processing. Autonomous machine vision lacks fast feature extraction, is unable to effectively generalize learning across domains, and depends on vast datasets for training. In order to overcome some of these limitations, theories of human information processing are worth studying closely. We explore their application to existing computer vision processing through the methods of spectral analysis. With this approach, we outline core concepts behind human generalization capabilities in visual object recognition. Emphasis is placed on a psychological model of feature detection and its realization in the form of periodic wave structures. We propose visual classification in spectral fully-connected layers. The procedure is evaluated within a supervised learning task for the classification of traffic signs in the Belgium Traffic Sign dataset for Classification (BTSC). The results support the assumption of a frequency-based representation of visual information in machines (R2=.67), compared to a pixel-based representation (R2=.35). The system’s performance highlights the importance of an adaptive manipulation of the frequency domain in modern visual agents.



A Neural Network Model of Object-based Attention and Incremental Grouping

Dražen Domijan

University of Rijeka, Croatia

A model of a recurrent competitive map is developed to simulate the dynamics of object-based attention and incremental grouping. The model is capable of simultaneously selecting arbitrarily many winners based on top-down guidance. In the model, local excitation opposes global inhibition and enables enhanced activity to propagate within the interior of the object. The extent of local excitatory interactions is modulated in a scale-dependent manner. Furthermore, excitatory interactions are blocked at the object’s boundaries by the output of the contour detection network. Thus, the proposed network implements a kind of multi-scale attentional filling-in. Computer simulations showed that the model is capable of distinguishing inside/outside relationships in a variety of input configurations. It exhibits a spatial distribution of reaching times to points on the object that is consistent with recent behavioral findings: the speed of activity propagation in the interior of the object is modulated by proximity to the object’s boundaries. The proposed model shows how an elaborated version of the winner-take-all network can implement complex cognitive operations such as object-based attention and incremental grouping.



Neural dynamics of the competition between grouping organizations

Einat Rashal1,2, Ophélie Favrod1, Michael H. Herzog1

1Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland; 2Department of Experimental Psychology, University of Ghent, Belgium

Neural dynamics of the competition between grouping organizations have been studied to a limited extent. The paradigms used so far confounded grouping operations with task demands, using explicit reports of the predominantly perceived organization and biasing attention towards one grouping principle. The current study explored the effect of grouping strength on ERPs elicited for conflicting grouping principles using a primed-matching paradigm, where the grouping display was irrelevant to the task. In Experiment 1, proximity was pitted against brightness similarity in a conflicting columns/rows organization. Competition level was manipulated by increasing grouping strength of one principle or the other. In Experiment 2, proximity was presented alone, or in a weak/strong competition with size similarity. If conflicting organizations result in a hybrid representation, modifications would be evident for different degrees of grouping strength at early perceptual components. However, a competition-related component would appear in a later stage of processing, showing a difference between conflict and non-conflict conditions. We found no evidence for a competition specific component but did find modulations to the ERP waveforms at around 100-250ms from target onset. These results suggest that when grouping principles are in conflict, they produce a hybrid representation of the dominant and non-dominant organizations.



Proximity-induced perceptual grouping of random dot patterns in the presence of a tilting frame

Arefe Sarami1, Johan Wagemans2, Reza Afhami1

1Department of Arts, Tarbiat Modares University, Tehran, Iran; 2Brain & Cognition, University of Leuven (KU Leuven), Leuven, Belgium

The objective of this study was to investigate the effect of a tilted frame on the proximity-induced perceptual organization of random dot patterns. Ten random patterns of 9 dots were generated. For each pattern, a rectangular frame was tilted at 7 angles around the dots. In ongoing experiments, for each set of 70 randomly-ordered stimuli, 10 observers indicate the groups of dots in each stimulus, and each observer completes four sets of stimuli. This results in 28 grouping reports per dot pattern per observer. We randomly split the reports into two report sets and, within each report set, calculate the frequency with which each dot pair was placed in the same group. Chi-square and correlation independence tests between the frequencies from the two report sets are used to measure the within-subject consistency of grouping. For all 28 reports from each participant, we also calculate the frequency with which all dot pairs were placed in the same groups. Independence tests between the frequencies from each participant and those from the other participants are used to measure the between-subject consistency of grouping. A replication with patterns of 18 dots is being conducted to explore the effect of dot density on grouping consistency.
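The co-grouping frequency computation can be sketched as follows; the toy reports and the plain Pearson correlation (standing in for the full chi-square and correlation independence tests) are illustrative assumptions, not the study's analysis code.

```python
# Hypothetical sketch: frequency with which each dot pair lands in the
# same group across grouping reports, plus a Pearson correlation
# between two report sets as a simple consistency measure.
from itertools import combinations

def cogroup_freq(reports, n_dots):
    """reports: list of partitions; each partition is a list of groups
    (lists of dot indices). Returns per-pair co-grouping frequency."""
    freq = {pair: 0 for pair in combinations(range(n_dots), 2)}
    for partition in reports:
        for group in partition:
            for pair in combinations(sorted(group), 2):
                freq[pair] += 1
    return {pair: count / len(reports) for pair, count in freq.items()}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Toy data: two halves of one observer's reports for a 3-dot pattern.
half_a = [[[0, 1], [2]], [[0, 1], [2]]]
half_b = [[[0, 1], [2]], [[0, 1, 2]]]
fa = cogroup_freq(half_a, 3)
fb = cogroup_freq(half_b, 3)
pairs = sorted(fa)
consistency = pearson([fa[p] for p in pairs], [fb[p] for p in pairs])
```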



Perceiving 3D Mirror- and Rotational-Symmetry

Maddex Farshchi, Tadamasa Sawada

National Research University Higher School of Economics, Moscow, Russian Federation

The human visual system is very sensitive to the 3D mirror-symmetry of an object’s shape. This is fortunate because 3D mirror-symmetry serves as the critical a priori constraint for perceiving a shape veridically. Note that this beneficial effect of 3D mirror-symmetry on visual perception can be attributed exclusively to its geometrical properties. Also note that 3D rotational-symmetry has geometrical properties analogous to those possessed by 3D mirror-symmetry. This makes it possible to postulate that 3D rotational-symmetry may also affect human perception in the same way. This possibility was studied by comparing a human observer's perception of 3D mirror-symmetry with perception of 3D rotational-symmetry. We required our observers to discriminate 3D symmetric and 3D asymmetric pairs of contours under both monocular and binocular viewing conditions. We found that only the 3D mirror-symmetry discrimination was reliable. With monocular viewing, the discrimination of 3D rotational-symmetry was near chance-level. Performance was slightly better with binocular viewing for both 3D mirror- and 3D rotational-symmetry but performance with 3D rotational-symmetry was not sufficiently reliable to be taken seriously. These results suggest that the human visual system processes these two types of symmetry very differently despite the fact that they are geometrically analogous to one another.



Synergy of spatial frequency and orientation bandwidth in texture segregation

Cordula Hunt, Günter Meinhardt

Johannes Gutenberg-Universität Mainz, Germany

For a multitude of visual features, synergy has been shown, among them spatial frequency and orientation in Gabor random fields. We used noise textures filtered with a Gabor kernel controlled in base frequency and with random orientation to study the bandwidths of frequency and orientation. In a detection and identification experiment, we increased bandwidth in the target or the background by manipulating the Gaussian window of the Gabor. Our results show that both bandwidths exhibit feature-typical behavior as well as synergy if modified jointly. For detection, the d’ difference between double-cue performance and the prediction of orthogonality (DO) is very similar irrespective of whether target or background was modified (DO of 1.0 or 0.89; Cohen’s d of 2.69 or 2.19). Interestingly, for identification there is a marked difference in DO depending on the actual bandwidth of the target (DO of 1.29 or 0.59; Cohen’s d of 2.53 or 1.37). Our results indicate that the salience of the target does not depend on the absolute bandwidth but on the absolute difference between target and surround bandwidth. Target identification, however, is strongly influenced by absolute bandwidth.



Overlapping surfaces are not necessary for overestimation of the number of elements in a three-dimensional stimulus

Saori Aida1, Yusuke Matsuda2, Koichi Shimono2

1Tokyo University of Technology, Japan; 2Tokyo University of Marine Science and Technology, Japan

Elements in a stereoscopic three-dimensional stimulus that depicts parallel, overlapping, and transparent surfaces are perceived to be more numerous than those in a stereoscopic two-dimensional stimulus that depicts a single flat surface, even when both have the same number of elements. In two experiments, we investigated the hypothesis that, when estimating the total number of elements, the visual system takes into account elements that are "potentially" occluded by the front surface and exist between the overlapping surfaces. We used three types of random-dot stereoscopic 3-D stimuli: a stereo-transparent stimulus, which depicted two parallel overlapping surfaces; a stepwise stimulus, which depicted two non-overlapping surfaces at different depths; and a "lump" stimulus, which depicted a volume but no surfaces. Experiment 1 revealed that when the disparity of elements was small, the number of elements in the stepwise stimulus was overestimated in the same manner as in the stereo-transparent stimulus. Experiment 2 revealed that the total number of elements in the lump stimulus was overestimated irrespective of disparity size. The results indicate that overestimation of the number of elements in a 3-D stimulus can occur irrespective of whether two stereo-surfaces overlap, which is inconsistent with the hypothesis.



Influence of disparity and motion cues on the shape perception of transparent objects

Nick Schlüter, Franz Faul

Institut für Psychologie, Christian-Albrechts-Universität zu Kiel, Kiel, Germany

Image regularities that could be used to estimate the shape of transparent objects arise from background distortions due to refraction, changes in chromaticity and intensity due to absorption, and mirror images due to specular reflection. Our previous findings show that although the presence of these regularities can contribute positively to shape perception in certain situations, the shape of transparent objects is judged less accurately than that of opaque ones. Here, we investigate how the overall performance and the contribution of individual shape cues change when information from disparity is removed or when information from dynamics is added. We presented subjects with images of randomly shaped transparent objects and asked them to indicate their local surface orientation (gauge figure task). Our results show that omitting disparity information by using monoscopic stimuli impedes shape perception, but to a much lesser extent than for opaque objects. On the other hand, adding dynamics by oscillating the camera around the object substantially improves the performance, and much more so than in the opaque case. Moreover, the results suggest that this performance increase cannot be attributed solely to the concomitant increase in shape information conveyed by the contour of the object.



Preattentive ensemble-based segmentation of multiple items: evidence from the mismatch negativity

Anton Lukashevich, Maria Servetnik, Igor Utochkin

National Research University Higher School of Economics, Russian Federation

Our visual system is capable of rapid categorization and segmentation of multiple briefly presented items, even when they are spatially intermixed (for example, seeing a set of berries among leaves vs. just leaves of various shades in autumn). We have previously shown that the statistics of the feature distribution can be used for rapid categorization, namely whether the distribution has a single peak or several peaks. If several peaks are present with large gaps between them, the set can be split into relatively independent categorical groups. Here, we tested the automaticity of rapid categorization in an ERP study. We looked at the mismatch negativity (MMN), considered an ERP correlate of automatic processing (Naatanen, 1998). Our observers performed a central task diverting their attention from sequentially presented background textures with different combinations of length and orientation. The oddball event was a change in the sign of the length-orientation correlation. We found evidence for an MMN to oddballs in the time window of 150-200 ms. Critically, the MMN was strongest when lengths and orientations had a two-peak rather than a smooth uniform distribution, which allows us to consider rapid categorization as having an automatic, preattentive component. This work is supported by the Russian Science Foundation (project 18-18-00334).



Age judgements of faces: evidence for ensemble averaging

Deema Awad1, Colin Clifford2, David White2, Isabelle Mareschal1

1Queen Mary University of London, United Kingdom; 2University of New South Wales, Australia

Previous work has shown that age estimates of faces are biased, with an average estimation error of 8 years. Here, we sought to examine whether the presence of other faces influences age judgments of a target face. To do this, we used a database of standardized passport photos and asked participants (n = 136) to estimate the age of a target face that was viewed on its own or surrounded by two different-identity flanker faces. The flanker faces had the same age as each other and differed from the target’s age by ±15 years. We find that age estimates are systematically biased towards the age of the flankers, F(2,746) = 27.86, p < 0.001. The target face appeared younger when it was flanked by younger faces, and older when flanked by older faces, than when it was viewed alone. These effects were modulated by stimulus age, with the largest biases occurring when the target face was similar in age to the participants’. We also tested other target:flanker age differences and find similar results, although the effect was strongest for flankers differing from the target by ±15 years. These results suggest that age judgments may be subject to ensemble averaging.
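One simple reading of ensemble averaging in this setting is a weighted average of the target's age and the mean flanker age. The weight below is a free parameter chosen for illustration, not a value estimated in the study.

```python
# Hypothetical ensemble-averaging model of the flanker bias: perceived
# age is a weighted mix of the target's age and the mean flanker age.
# The weight w is an illustrative free parameter, not a fitted value.
def perceived_age(target_age, flanker_ages, w=0.2):
    if not flanker_ages:  # target viewed alone: no bias
        return float(target_age)
    ensemble = sum(flanker_ages) / len(flanker_ages)
    return (1 - w) * target_age + w * ensemble

alone = perceived_age(40, [])
younger = perceived_age(40, [25, 25])  # flankers 15 years younger
older = perceived_age(40, [55, 55])    # flankers 15 years older
# Bias direction matches the reported result: younger flankers pull
# the estimate down, older flankers pull it up.
```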



Filling-in of two antagonistic features into artificial scotoma by MIB

Hiroki Yokota, Seiichiro Naito

Tokai University, Japan

Purpose: We investigated “filling-in” at an artificially created scotoma (AS) produced by Motion Induced Blindness (MIB), in which the disappearing areas were surrounded by two antagonistically featured backgrounds. The features were (1) brightness, (2) color, (3) texture, (4) dynamic random dots, (5) afterimage, (6) motion, and (7) depth.

Method: Several white test discs were arranged circularly around the fixation point at equi-eccentric loci. For each test disc, the MIB inducer was applied to one disc or two at a time, in turn. The inducer consisted of expanding rings surrounding the test disc. Observers noticed the disappearance of a test disc and reported that the two background textures were filled in, as if the subjective border line was extended and crosslinked over the disappeared test disc.

Results: Our “filling-ins” were quite consistent with those at the natural blind spot.

Discussion: The filling-in of uniform textures such as (1)-(5) above has been studied; however, the filling-in of two antagonistic features into a common area has not yet been fully investigated. The “filling-ins” of (6) and (7) were novel. The natural blind spot does not show (7).

Conclusions: Our AS was similar to the natural blind spot. The relevant nature of the border which instigates the “filling-in” should be highly abstract, integrating all of (1)-(7).



Thinking Around the Corner: How to Process Sharp Bends in Contour Integration

Alina Schiffer1, Udo Ernst1, Malte Persike2

1Institute for Theoretical Physics, University of Bremen, Hochschulring 18, 28359 Bremen, Germany; 2Center for Teaching and Learning Services (CLS), Theaterplatz 14, D-52062 Aachen, Germany

Contour Integration (CI) links disjoint image segments to identify the physically contiguous boundaries of global shapes or objects. Usually, contour salience deteriorates to the point of invisibility as contour curvature increases. However, it was shown recently that the deterioration of contour visibility due to sharp changes in curvature can easily be remedied by inserting corner elements at the points of angular discontinuity (Persike & Meinhardt, 2016, 2017). Hence a question arises: what defines a “proper” corner, and how do small changes in the configuration of a corner element influence the visibility of contours?

We designed an experimental series analyzing the effect of corner-like elements placed at points of angular discontinuity, consisting of two line segments with varying distances. These distances varied from disconnected line segments ( | _ ), as in classical CI paradigms, through line segments forming corners (∟), to overlapping line segments forming crosses (+). These experiments confirmed the stabilizing effect of corners on contour visibility. They showed that this effect is strongest for proper corners and decreases symmetrically with larger distance in both directions, for classic segments as well as crossings. We currently perform model simulations to identify putative integration mechanisms explaining the experimental data.



Interaction of Convexity and Implied-Closure in Figure-Ground Organization

Tandra Ghose, Ananya Mukherjee

Uni Kaiserslautern, Germany

In figure-ground organization, a convex shared-contour implies closure, because extending it on the convex side will lead to the formation of a closed region. We investigated how convexity and implied-closure interact in determining figure-ground organization in cases of conflict. Conflict stimuli were created by manipulating the top and bottom parts of the shared-contour, which slightly curve around to intersect the perpendicular borders of the bipartite image. These manipulated segments were cropped and flipped along an axis parallel to the shared contour in order to reverse the direction of curvature. In congruent conditions, small segments of the shared contour were flipped close to the middle, so that net convexity was matched to the incongruent condition while convexity and implied-closure remained on the same side of the shared contour.

Stimuli consisted of 256 bipartite black-and-white images with circular/triangular shared borders in congruent and incongruent variations. Data from 18 participants show that in cases of conflict, convexity has a lower influence on figure-ground judgments than in congruent cases. Data were noisier in conditions where the shared border was horizontal and participants made up/down “figural” judgments, due to strong interference by the “lower region is figural” cue.



Link ownership assignment – a psychological factor in knot magic

Vebjørn Ekroll

University of Bergen, Norway

Many magic tricks involving knots and ropes are surprisingly effective, suggesting that unknown psychological factors are involved in creating the experience of magic (i.e. the illusion of impossibility). Here, I describe a novel perceptual principle which seems to bias our conscious reasoning about links between ropes, and thus may explain why many of these tricks are so effective. A link between two loops, such as the one created by two interlinked rings, is a shared property of the two loops. Depending on the current geometrical shape of two linked pieces of rope, however, we tend to perceive the link as belonging to only one of them. This phenomenon of link ownership assignment is reminiscent of the well-known phenomenon of border ownership assignment in figure-ground perception, and can be regarded as an example of Bregman’s principle of exclusive allocation. I illustrate how link ownership assignment is involved in several magic tricks and puzzles and argue that it may be an example of a more broadly applicable psychological principle, according to which the current geometrical shape category of a flexible object is so prominent in our immediate visual imagery that it blocks imagery of other possible shape categories.



The illusion of absence in magic tricks

Mats Svalebjørg, Heidi Øhrn, Vebjørn Ekroll

University of Bergen, Norway

A recent analysis of the role of amodal completion in magic revealed a curious illusion of absence, where the space behind an occluder is compellingly experienced as empty. Informal observations suggest that this illusion is similar to illusions based on amodal completion in the sense that it refers to occluded portions of a visual scene and seems to be largely impervious to conscious knowledge. Interestingly, however, this illusion cannot be explained by extant models of amodal completion, because it does not involve directly visible parts of objects that can be used as a starting point for completion processes. The aim of the present experiment was to test the hypothesis that the illusion of absence is cognitively impenetrable in the same way as amodal completion. Participants viewed magic tricks based on either attentional misdirection, amodal completion or the illusion of absence and tried to infer the secret behind the tricks after one, two or three presentations. The results show that tricks based on the illusion of absence are very difficult to debunk, even after repeated presentations. In this regard, they are similar to tricks based on amodal completion, but different from tricks based on attentional misdirection.



Occlusion illusion without occlusion

Tom Scherzer

Kiel University, Germany

A partially occluded object is usually perceptually completed to a whole, whereby the added parts have no visual qualities such as brightness or color (“amodal completion”; Michotte, Thinès, & Crabbé, 1964/1991). Interestingly, however, under certain conditions not only an amodal, but also a partial modal completion occurs (“occlusion illusion”; Palmer, Brooks, & Lai, 2007). The data presented here suggest that such a partial modal completion occurs not only with opaque, but also with semi-transparent screens. This would mean that the occlusion illusion cannot be attributed to occlusion itself. Rather, the effect seems to occur when there is clear evidence regarding the shape and quality of the continuation of a visible element. The findings are largely consistent with the theoretical idea that the phenomenal presence of visual qualities, as in the case of modal completion, represents a strong conclusiveness of sensory evidence (“conclusive-sensory-evidence hypothesis”; Scherzer & Faul, in press).



The effect of occlusion on “tool effect”

Hidemi Komatsu

Keio University, Japan

This study investigated the effect of occlusion on the “tool effect” (without an intermediary object). The “tool effect” is one form of perceived causality examined by Michotte (1951). Objects A (launcher), I (intermediary) and B (target) were arranged horizontally from the left and began to move in the order A, I, B. Object A was then perceived to be the cause of the other objects’ motions. Even if Object I was omitted, the “tool effect” could still be perceived: some amodal “tool” was perceived to push Object B. The nearer the stopping position of Object A was to Object B, the more often the “tool effect” was perceived. In these experiments, by examining the relationship between the stopping position of Object A and the reported rate of causality, causal impressions with and without an occluding object were compared. Without an occluding object, the rate of the “tool effect” was negatively correlated with the distance between the stopping position of Object A and the starting position of Object B (y = -4.22x + 79.57, R² = 0.97). However, when the stopping point of Object A was occluded, the rate increased as the distance grew longer; the relationship approximated a sigmoid curve (y = -0.90x³ + 13.05x² + 57.03, R² = 0.99).
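The two reported fits can be written out as a small illustrative sketch (this is not the authors’ analysis code; the units and scaling of the distance x are not specified in the abstract):

```python
# Illustrative sketch of the two reported fits for the rate of perceived
# "tool effect" as a function of the gap x between Object A's stopping
# position and Object B's starting position (units unspecified in the abstract).

def rate_without_occluder(x):
    """Linear fit reported for the no-occluder condition (R^2 = 0.97)."""
    return -4.22 * x + 79.57

def rate_with_occluder(x):
    """Cubic fit reported for the occluder condition (R^2 = 0.99)."""
    return -0.90 * x**3 + 13.05 * x**2 + 57.03

# Without an occluder the reported rate falls as the gap grows ...
assert rate_without_occluder(0.0) > rate_without_occluder(5.0)
# ... whereas with an occluder it initially rises with the gap.
assert rate_with_occluder(5.0) > rate_with_occluder(0.0)
```

The opposite signs of the two trends capture the paper’s central contrast: occlusion reverses how distance modulates the causal impression.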



Anticipating object trajectories in the presence of a goal

Andrea Frielink-Loing, Arno Koning, Rob van Lier

Radboud University, Donders Institute for Brain, Cognition and Behaviour

We investigated whether attention allocation during Multiple Object Tracking (MOT) is influenced by the presence of a goal. We used adaptations of MOT paradigms to determine the allocation of attention near tracked objects as they moved towards a goal. In Experiment 1, participants tried to ‘catch’ targets by controlling a goal. A vertical line centred on the screen acted as a wall off which objects bounced. Target bounces triggered the appearance of a probe in either the bounce direction (i.e., the real future object location) or the linear direction. Participants detected probes better when the target subsequently reached the goal, compared to when there was no goal. In Experiment 2, participants additionally controlled the permeability of the vertical wall, allowing objects to move through or bounce off. Again, probes appeared when targets reached the vertical wall. Two corners of the screen were designated fixed goals, one ‘good’ and one ‘bad’. Only when a target moved towards a ‘good’ goal were probes detected better at locations on the path towards that goal; the opposite was true for targets moving towards a ‘bad’ goal. We conclude that the presence and valence of a goal influences how attention is allocated during object tracking.



The influence of perspective of an inanimate object on the boundary extension phenomenon

Giada Sangiorgi1, Gabriele Pesimena2, Elena Commodari1, Marco Bertamini3, Alessandro Soranzo2

1University of Catania, Italy; 2Sheffield Hallam University; 3University of Liverpool

One of the most compelling phenomena in visual memory is Boundary Extension (BE): the tendency to remember close-up scenes as if they included more information than was seen. Intraub and Richardson (1989; JEP:LMC) suggested that this phenomenon is due to a filling-in process: we fill the scene with information around the boundaries based on our knowledge.

For BE to occur, the scene must be perceived as part of a continuous environment. This project investigated whether BE can be implicitly affected by the directional information provided by a camera. In the learning phase of a recognition experiment, participants were presented with an image on a computer screen that could have been cropped either to the left or to the right, whilst a camera could have been positioned either to their left or to their right. In the testing phase, the image was presented again, and participants were asked to judge whether it was the same. Results showed that BE magnitude is reduced when the camera is on the same side as the cropped image. It is concluded that implicit directional cues can affect our ability to visually memorize images.



Co-representation of the features of objects in the processes of perception and assessment of the chances of joint events

Sergei L. Artemenkov

Moscow State University of Psychology and Education (MSUPE), Russian Federation

Perceptual processes include the coexistence of different alternatives, providing the flexibility needed by a multifunctional perceptual and cognitive system. In perceptual reality, it is more reliable for an object to have many defined and related features than just one. Thus, perceptual processes (unlike thinking processes) treat an object with many simultaneous and related features as more valid and actual than an abstract object with just a few random features. This suggests that the nature of perceptual cognition is complex and quite different from the common probability logic of joint independent events considered by probability theory. The transcendental psychology approach to perception makes it possible to substantiate a co-representative mathematical probability model that is compliant with human perceptual psychology and heuristic judgment under uncertainty. This model establishes different rules for combining probabilities and shows that the probability of joint events may exceed the probability of any of the events separately. A cross-cultural experiment on the perception of the likelihood of joint events showed that a person’s decision making can be influenced in a predictable direction by varying perceived and semantic situational parameters in accordance with the theoretical assumptions of the new model for estimating probability.
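The contrast with classical probability can be made concrete. Under the standard axioms, a conjunction can never be more probable than either conjunct, so P(A and B) ≤ min(P(A), P(B)); the model described above is claimed to violate exactly this bound. Since the model itself is not specified in the abstract, the sketch below illustrates only the classical rule it departs from:

```python
# Classical probability: the conjunction bound P(A and B) <= min(P(A), P(B))
# always holds. For independent events, P(A and B) = P(A) * P(B).
# The co-representative model described in the abstract reportedly allows
# joint events to exceed this bound; that model is not specified here.

def conjunction_independent(p_a, p_b):
    """Joint probability of two independent events under classical rules."""
    return p_a * p_b

p_a, p_b = 0.8, 0.6
p_joint = conjunction_independent(p_a, p_b)

# The classical bound that the proposed model is said to exceed:
assert p_joint <= min(p_a, p_b)
```

Note that the well-known conjunction fallacy in human judgment (people rating P(A and B) above P(A)) is precisely the behaviour the classical rule forbids, which is the gap the proposed model aims to formalize.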



Phenomenal Causality and Sensory Realism

Sebastian Alexander Bruijns1, Kristof Meding1,2, Bernhard Schölkopf2, Felix Wichmann1

1Universität Tübingen, Germany; 2Max-Planck-Institute for Intelligent Systems, Germany

One of the most important tasks for humans is the attribution of causes and effects, in diverse contexts including visual perception.
Albert Michotte was one of the first to systematically study causal visual perception using his now well-known launching event paradigm. Launching events are the collision and transfer of movement between two objects (featureless disks in the original experiments). The perceptual simplicity of the original displays allows for insight into the basic mechanisms governing causal perception.
We wanted to study the relation between causal ratings for launching in the usual abstract setting and launching collisions in a photo-realistic setting. For this purpose we presented typical launching events with differing temporal gaps, the same launching processes with photo-realistic billiard balls, and photo-realistic billiard balls with realistic physics, i.e. an initial rebound of the first ball after collision and a short sliding phase of the second ball. We found that simply giving the normal launching stimulus realistic visuals led to lower causal ratings, but realistic visuals together with realistic physics evoked higher ratings. We discuss this perhaps counter-intuitive result in terms of cue conflict and the seemingly detailed (implicit) physical knowledge embodied in our visual system.



Perception of temporal dependencies in autoregressive motion

Kristof Meding1,2, Bernhard Schölkopf2, Felix A. Wichmann1

1Neural Information Processing Group, University of Tübingen, Tübingen, Germany; 2Max Planck Institute for Intelligent Systems. Department of Empirical Inference, Tübingen, Germany

Understanding the principles of causal inference in the visual system has a long history, certainly since the seminal studies by Michotte. During the last decade, a new type of causal inference algorithms has been developed in statistics. These algorithms use the dependence structure of residuals in a fitted additive noise framework to detect the direction of causal information from data alone (Peters et al., 2008).
In this work we investigate whether the human visual system may employ similar causal inference algorithms when processing visual motion patterns, focusing, as in the theory, on the arrow-of-time. Our data suggest that human observers can discriminate forward and backward played movies of autoregressive (AR) motion with non-Gaussian additive independent noise, i.e., they appear sensitive to the subtle temporal dependencies of the AR-motion, analogous to the high sensitivity of human vision to spatial dependencies in natural images (Gerhard et al., 2013). Intriguingly, a comparison to known causal inference algorithms suggests that humans employ a different strategy.
The results demonstrate that humans can use spatiotemporal motion patterns in causal inference tasks. This finding raises the question of whether the visual system is tuned to motion in an additive noise framework.
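The stimulus class described above can be sketched in a few lines. This is an assumed illustration, not the authors’ stimulus code: the AR coefficient, the uniform (non-Gaussian) innovation, and the trace length are all placeholder choices. Time reversal is then simply playing the same trace backwards:

```python
import random

# Minimal sketch (assumed, not the authors' code) of a 1-D autoregressive
# AR(1) motion trace with non-Gaussian additive independent noise:
#   x[t] = a * x[t-1] + e[t],  e[t] ~ Uniform(-h, h)
# A "backward-played movie" of this motion is the time-reversed trace.

def ar1_trace(n, a=0.8, noise_halfwidth=1.0, seed=0):
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        # Non-Gaussian innovation: uniform, hence the arrow-of-time is
        # statistically detectable in principle (Peters et al., 2008).
        x.append(a * x[-1] + rng.uniform(-noise_halfwidth, noise_halfwidth))
    return x

forward = ar1_trace(200)
backward = forward[::-1]  # time-reversed presentation of the same trace
assert len(forward) == len(backward) == 200
```

With Gaussian innovations the forward and backward processes are statistically indistinguishable; the non-Gaussian noise is what makes the discrimination task described in the abstract well-posed.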



The comparison of the effectiveness of learning using virtual reality and traditional educational methods

Artem Kovalev, Julia Rogoleva

Lomonosov Moscow State University, Russian Federation

Virtual reality (VR) technologies allow users to acquire a large amount of visual information in a short time, which is an important feature for using VR in education. This study aimed to evaluate the effectiveness of VR in the learning of previously unknown information. The participants (29 students: 22 females, 7 males) received three types of stimuli: text, 2D video and VR. The efficiency of learning was tested with questions before and after each experiment. VR stimuli were presented on a Samsung Gear VR. The results showed that the number of correct answers changed significantly from the baseline to the post-learning test only in the “text” (t = 4.4, p < 0.001) and “VR” (t = 3.7, p < 0.001) conditions. The number of correct answers increased and differed significantly between “VR” and “2D” (t = 0.398, p < 0.001) and between “text” and “2D” (t = 0.29, p < 0.001). Thus, text and VR were more efficient for studying than 2D video. We can assert that VR offers an effective method to improve learning, but traditional teaching methods continue to play an important role in education. The research was supported by RFBR grant №18-29-22049.



Conference: ECVP 2019