91.
The processing of visual and vestibular information is crucial for perceiving self-motion. Visual cues, such as optic flow, have been shown to induce and alter vestibular percepts, yet the role of vestibular information in shaping visual awareness remains unclear. Here we investigated whether vestibular signals influence the access to awareness of invisible visual signals. Using natural vestibular stimulation (passive yaw rotations) on a vestibular self-motion platform, and optic flow masked through continuous flash suppression (CFS), we tested whether congruent visual–vestibular information would break interocular suppression more rapidly than incongruent information. We found that when the unseen optic flow was congruent with the vestibular signals, perceptual suppression, as quantified with the CFS paradigm, was broken more rapidly than when it was incongruent. We argue that vestibular signals shape the formation of visual awareness by enhancing access to awareness for congruent multisensory stimulation.
92.
Research in cycling safety seeks to better understand bicycle-related crashes and injuries. The present naturalistic cycling study contributes to this research by collecting data about bicyclists' behavior and impressions of safety-critical situations, information unavailable in traditional data sources (e.g., accident databases, observational studies). Naturalistic data were collected from 16 bicyclists (8 female; M = 39.1 years, SD = 11.4 years) who rode instrumented bicycles for two weeks. Bicyclists were instructed to report all episodes in which they felt uncomfortable while riding (subjective risk perception), even if they did not fall. After data collection, the bicyclists were interviewed in detail about their self-reported safety-critical events. Environmental conditions were also recorded via video (e.g., road surface, weather). In total, 63 safety-critical events (56 non-crashes, 7 crashes) were reported by the bicyclists, mainly due to interactions with other road users, but also due to poorly maintained infrastructure. In low-visibility conditions, vehicle-bicycle and bicycle-bicycle events were the most uncomfortable for the bicyclists. Self-reported pedestrian-bicycle events primarily involved pedestrians starting to cross the bicycle path without looking. With one exception, all crashes found in the study were related to poorly maintained roads and infrastructure. In particular, construction work or obstacles in the bicycle path were reported as uncomfortable and annoying by the bicyclists. This study shows how naturalistic data and bicyclist interviews together can provide a more informative picture of safety-critical situations experienced by bicyclists than traditional data sources can.
93.
Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short-term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754–763). However, studies also suggest that these effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127–133), revealing better recall when the cued items are distributed across hemifields rather than within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1 s, 3 s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, these effects were only found when the pre-cues were directed to stimuli displayed across hemifields rather than to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. These findings suggest that attentional pre-cues strongly influence both the transfer of information into VSTM and its subsequent maintenance, allowing bilateral items to better survive decay.
94.
Ruud Koolen; Emiel Krahmer. Cognitive Science, 2024, 48(6): e13473
Experiments on visually grounded, definite reference production often manipulate simple visual scenes in the form of grids filled with objects, for example, to test how speakers are affected by the number of objects that are visible. Regarding the latter, it was found that speech onset times increase along with domain size, at least when speakers refer to nonsalient target objects that do not pop out of the visual domain. This finding suggests that even in the case of many distractors, speakers perform object-by-object scans of the visual scene. The current study investigates whether this systematic processing strategy can be explained by the simplified nature of the scenes that were used, and whether different strategies can be identified for photo-realistic visual scenes. In doing so, we conducted a preregistered experiment that manipulated domain size and saturation; replicated the measures of speech onset times; and recorded eye movements to measure speakers' viewing strategies more directly. Using controlled photo-realistic scenes, we find (1) that speech onset times increase linearly as more distractors are present; (2) that larger domains elicit relatively fewer fixation switches back and forth between the target and its distractors, mainly before speech onset; and (3) that speakers fixate the target relatively less often in larger domains, mainly after speech onset. We conclude that careful object-by-object scans remain the dominant strategy in our photo-realistic scenes, to a limited extent combined with low-level saliency mechanisms. A relevant direction for future research would be to employ less controlled photo-realistic stimuli that do allow for interpretation based on context.
95.
Weiyan Liao; Janet Hui-wen Hsiao. Cognitive Science, 2024, 48(9): e13489
In isolated English word reading, readers perform best when their initial eye fixation is directed to the area between the word beginning and the word center, that is, the optimal viewing position (OVP). Thus, how well readers voluntarily direct their eye gaze to this OVP during isolated word reading may be associated with reading performance. Using Eye Movement analysis with Hidden Markov Models, we discovered through clustering two representative eye movement patterns during lexical decisions, which focused on the OVP and the word center, respectively. Higher eye movement similarity to the OVP-focusing pattern predicted faster lexical decision times over and above cognitive abilities and lexical knowledge. However, the OVP-focusing pattern was associated with longer isolated single-letter naming times, suggesting that identifying isolated letters and multi-letter words places conflicting demands on visual abilities. In contrast, in both word and pseudoword naming, although clustering did not reveal an OVP-focused pattern, higher consistency of the first fixation, as measured by entropy, predicted faster naming times over and above cognitive abilities and lexical knowledge. Thus, developing a consistent eye movement pattern focused on the OVP is essential for word orthographic processing and reading fluency. This finding has important implications for interventions for reading difficulties.
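For context, the entropy measure of first-fixation consistency is plausibly the Shannon entropy of the distribution of first-fixation locations (the abstract does not spell out the exact estimator): H = -\sum_i p_i \log_2 p_i, where p_i is the proportion of trials whose first fixation lands in region i of the word. Lower H indicates that first fixations cluster more consistently on the same region, which is the sense in which higher consistency predicted faster naming.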
96.
The degree to which different sport modalities rely on cognitive visual skills remains under-researched. This study sought to further our understanding of the relationship between sport modality and visual search ability, visual working memory, and reasoning. Ninety-five participants took part in the present study. Visual search ability was assessed using the Visual Search Task. Visual working memory was assessed using the backwards Corsi Block-Tapping Task. Reasoning abilities were assessed using the Cognitive Reflection Task. Results indicate that visual search skills benefit more from open-skill sports than from closed-skill sports. It is important to emphasize, however, that this result was associated with a small effect size. Moreover, the present findings indicate that closed-skill athletes do not differ from control individuals in visual search, working memory, or reasoning abilities.
97.
The visual behaviors and movement characteristics of pedestrians are related to potential safety hazards in their surroundings, such as approaching vehicles. This study primarily aimed to investigate the visual patterns and walking behaviors of pedestrians interacting with approaching vehicles. Field experiments were conducted at two uncontrolled crosswalks located on Cuihua and Yanta roads in Xi'an, China. The visual performance of pedestrians was assessed using an eye-tracking system from SensoMotoric Instruments (SMI). Moreover, the motion trajectories of the pedestrians and approaching vehicles were obtained using an unmanned aerial vehicle. Subsequently, the visual attributes and movement trajectories of the pedestrians and the motion trajectories of the approaching vehicles were statistically analyzed. The results showed that approaching vehicles significantly attracted the fixation of crossing pedestrians, accounting for 29.5% of the total fixation duration; that is, pedestrians consistently directed more fixation points to approaching vehicles than to other stimuli. As a vehicle approached, pedestrians' fixation shifted from other areas of interest to the vehicle. Moreover, higher vehicle velocity and shorter pedestrian-vehicle distance increased pedestrians' fixation duration on the approaching vehicle and led them to make more saccades. However, vehicle velocity and pedestrian-vehicle distance were not significantly associated with pedestrians' movement attributes. These findings provide insights into the crossing behavior of pedestrians during pedestrian-vehicle interactions, which could assist future researchers and policy makers.
98.
A substantial body of research has examined the speed-accuracy tradeoff captured by Fitts' law, demonstrating the increases in movement time that occur as aiming tasks are made more difficult by decreasing target width and/or increasing the distance between targets. Yet serial aiming movements guided by internal spatial representations, rather than by visual views of targets, have not been examined in this manner, and the value of confirmatory feedback via different sensory modalities within this paradigm is unknown. Here we examined goal-directed serial aiming movements (tapping back and forth between two targets) in which the targets were visually unavailable during the task. However, confirmatory feedback (auditory, haptic, visual, and bimodal combinations of each) was delivered upon each target acquisition, in a counterbalanced, within-subjects design. Each participant performed the aiming task with their pointer finger, represented within an immersive virtual environment as a 1 cm white sphere, while wearing a head-mounted display. Despite visual target occlusion, movement times increased in accordance with Fitts' law. Although Fitts' law captured performance in each of the sensory feedback conditions, the slopes differed. Increasing difficulty had the smallest effect on movement times in the haptic condition, suggesting more efficient processing of confirmatory haptic feedback during aiming movements guided by internal spatial representations.
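For reference, Fitts' law is commonly written in its Shannon formulation (the abstract does not state which variant the authors fitted): MT = a + b \log_2\left(\frac{D}{W} + 1\right), where MT is movement time, D the distance (amplitude) between targets, W the target width, and a and b empirically fitted constants. The term \log_2(D/W + 1) is the index of difficulty, so shrinking W or increasing D raises the predicted movement time; the slope b reflects how steeply movement time grows with difficulty, which is what differed across feedback conditions.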
99.
We study how people attend to and memorize endings of events that differ in the degree to which objects in them are affected by an action: resultative events show objects that undergo a visually salient change in state during the course of the event (peeling a potato), and non-resultative events involve objects that undergo no, or only partial, state change (stirring in a pan). We investigate general cognitive principles, and potential language-specific influences, in verbal and nonverbal event encoding and memory, across two experiments with Dutch and Estonian participants. Estonian obligatorily marks a viewer's perspective on an event's result via grammatical case on direct object nouns: objects undergoing a partial/full change in state in an event are marked with partitive/accusative case, respectively. Therefore, we hypothesized increased saliency of object states and event results in Estonian speakers, as compared to speakers of Dutch. Findings show (a) a general cognitive principle of attending carefully to endings of resultative events, implying cognitive saliency of object states in event processing; and (b) a language-specific boost in attention to and memory of event results under verbal task demands in Estonian speakers. Results are discussed in relation to theories of event cognition, linguistic relativity, and thinking for speaking.
100.
Visual representations are prevalent in STEM instruction. To benefit from visuals, students need representational competencies that enable them to see meaningful information. Most research has focused on explicit conceptual representational competencies, but implicit perceptual competencies might also allow students to efficiently see meaningful information in visuals. The most common methods for assessing students' representational competencies rely on verbal explanations or assume explicit attention. However, because perceptual competencies are implicit and not necessarily verbally accessible, these methods are ill-equipped to assess them. We address these shortcomings with a method that draws on similarity learning, a machine learning technique that detects the visual features that account for participants' responses to triplet comparisons of visuals. In Experiment 1, 614 chemistry students judged the similarity of Lewis structures, and in Experiment 2, 489 students judged the similarity of ball-and-stick models. Our results showed that our method can detect the visual features that drive students' perception and suggested that students' conceptual knowledge about molecules informed perceptual competencies through top-down processes. Furthermore, Experiment 2 tested whether we can improve the efficiency of the method with active sampling. Results showed that random sampling yielded higher accuracy than active sampling for small sample sizes. Together, the experiments provide the first method to assess students' perceptual competencies implicitly, without requiring verbalization or assuming explicit visual attention. These findings have implications for the design of instructional interventions that help students acquire perceptual representational competencies.
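To illustrate the kind of similarity learning described above, the sketch below fits a low-dimensional embedding to triplet judgments using a simple hinge-style triplet loss in NumPy. It is a minimal, generic stand-in rather than the authors' actual pipeline: the item count, triplets, dimensionality, and hyperparameters are all invented for the example.

import numpy as np

# Each triplet (a, p, n) encodes one judgment: item a was rated more similar
# to item p than to item n. Gradient steps pull a toward p and push it away
# from n in a low-dimensional embedding whose axes approximate the visual
# features driving the judgments. Illustrative sketch only.
rng = np.random.default_rng(0)
n_items, n_dims, margin, lr = 20, 2, 1.0, 0.05
X = rng.normal(scale=0.1, size=(n_items, n_dims))  # embedding to be learned

# Synthetic stand-in triplets; real data would come from participants.
triplets = [(int(rng.integers(n_items)), int(rng.integers(n_items)), int(rng.integers(n_items)))
            for _ in range(500)]
triplets = [t for t in triplets if len(set(t)) == 3]  # drop degenerate triplets

for epoch in range(200):
    for a, p, n in triplets:
        d_ap, d_an = X[a] - X[p], X[a] - X[n]
        # Hinge loss: penalize triplets where the "similar" pair is not closer by `margin`.
        if d_ap @ d_ap - d_an @ d_an + margin > 0:
            X[a] -= lr * 2 * (d_ap - d_an)
            X[p] += lr * 2 * d_ap
            X[n] -= lr * 2 * d_an

print(X.round(2))  # learned 2-D coordinates for each item

Published triplet-embedding methods such as t-STE replace this hinge with a probabilistic loss, but the underlying idea, recovering latent feature dimensions from relative similarity judgments, is the same.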