Similar Literature
1.
Farmers have local knowledge relevant to the prospective assessment of biopharming—the farming of transgenic plants and animals genetically modified to produce pharmaceutical substances for use in humans or animals. However, biopharming regulatory regimes are being constructed in ways that render farmers' knowledge irrelevant. The exclusion of farmer knowledge is traceable to what we call its politico-epistemic unworkability as regulatory knowledge in regulatory regimes characterised not only by a focus on risk but also by pre-commitments to market-driven innovation and individual freedom of choice. Such innovation requires predictability in the regulatory environment, which is supplied in part through regulatory authorities' adoption of risk-assessment methodologies that both assume predictability (of natural and social worlds) and generate predictable decision outcomes. These regulatory approaches are co-constituted with scientific knowledge of gene flow developed through studies that can demonstrate a reliable and readily modellable decline of undesirable effect across space or time and therefore make possible the setting of clear and workable parameters for risk management. The farmers' knowledge discussed here highlights the extent to which this is a process of co-production. Moreover, it suggests that the claimed economic advantages of outdoor biopharming, achieved through the substitution of agronomic practice for laboratory infrastructure, depend both on natural processes that always also threaten to undermine confinement and on the habitus of successful farmers, which may in fact be incompatible with the kind of risk management that biopharming requires.

2.
The capacity to collect and analyse data is growing exponentially. Referred to as ‘Big Data’, this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: (1) informed consent, (2) privacy (including anonymisation and data protection), (3) ownership, (4) epistemology and objectivity, and (5) ‘Big Data Divides’ created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: (6) the dangers of ignoring group-level ethical harms; (7) the importance of epistemology in assessing the ethics of Big Data; (8) the changing nature of fiduciary relationships that become increasingly data saturated; (9) the need to distinguish between ‘academic’ and ‘commercial’ Big Data practices in terms of potential harm to data subjects; (10) future problems with ownership of intellectual property generated from analysis of aggregated datasets; and (11) the difficulty of providing meaningful access rights to individual data subjects that lack necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.

3.
4.
This paper reports data and scholarly opinion that support the perception of systemic flaws in the management of scientific professions and the research enterprise; explores the responsibility that professional status places on the scientific professions; and elaborates the concept of the responsible conduct of research (RCR). Data are presented on research misconduct, availability of research guidelines, and perceived research quality. An earlier version of this paper was presented at an International Conference on “Conflict of Interest and its Significance in Science and Medicine” held in Warsaw, Poland, 5–6 April 2002. The opinions expressed herein are those of the author and do not necessarily represent the views of the Office of Research Integrity, the U.S. Department of Health and Human Services, or any other federal agency.

5.
Regulations recently enacted by the Public Health Service and the National Science Foundation to address misconduct in scientific research were designed primarily to curtail deliberate forms of misconduct, such as fabrication or falsification of findings; however, researchers may also be held accountable for inadvertent deficiencies in data management. This article examines some of the problems in data quality control, documentation, and data retention that can occur when computers are used in scientific research. It focuses on deficiencies that could make it difficult to verify the integrity of research data or to reproduce statistical analyses. Strategies for prevention of data management problems are recommended.

6.
Scientific misconduct includes the fabrication, falsification, and plagiarism (FFP) of concepts, data or ideas; some institutions in the United States have expanded this concept to include “other serious deviations (OSD) from accepted research practice.” It is the absence of this OSD clause that distinguishes scientific misconduct policies of the past from the “research misconduct” policies that should be the basis of future federal policy in this area. This paper introduces a standard for judging whether an action should be considered research misconduct as distinguished from scientific misconduct: by this standard, research misconduct must involve activities unique to the practice of science and must have the potential to negatively affect the scientific record. Although the number of cases of scientific misconduct is uncertain (only the NIH and the NSF keep formal records), the costs are high in terms of the integrity of the scientific record, diversions from research to investigate allegations, ruined careers of those eventually exonerated, and erosion of public confidence in science. Existing scientific misconduct policies vary from institution to institution and from government agency to government agency; some have highly developed guidelines that include OSD, others have no guidelines at all. One result has been that the federal False Claims Act has been used to pursue allegations of scientific misconduct. As a consequence, such allegations have been adjudicated in federal courts, rather than judged by scientific peers. The federal government is now establishing a first-ever research misconduct policy that would apply to all research funded by the federal government, regardless of which agency funded the research or whether the research was carried out in a government, industrial or university laboratory. Physical scientists, who up to now have only infrequently been the subject of scientific misconduct allegations, must nonetheless become active in the debate over research misconduct policies and how they are implemented, since they will now be explicitly covered by this new federal-wide policy. Disclaimer: The authors are grateful for the support for the conduct of this research provided by the United States Department of Energy (DOE). The views expressed in this paper are solely those of the authors and were formed and expressed without reference to positions taken by DOE or the Pacific Northwest National Laboratory (PNNL). The views of the authors are not intended either to reflect or imply positions of DOE or PNNL.

7.
The concepts and methods used by regulatory agencies worldwide to assess the safety of flavouring additives were designed by and for the flavouring industry. They embody and embed, in routine regulatory practice, the industry's commercial interests in minimising regulatory costs and the risk that the market for its products might be restricted. First sketched out by US flavouring company scientists in the early 1960s, this approach required almost no experimental data, and was highly permissive, relative to both our knowledge (and lack of it) about chemical toxicity and the ways other kinds of food additives are regulated. A ‘realist constructivist’ analysis illustrates how the industry's approach was also anti-scientific and unscientific because it served to discourage scientific investigation of important aspects of the phenomena it purported to evaluate, and because it relied on assumptions and hypotheses that lacked any evidential basis. The industry approach was first used to assess flavourings in the USA, where the industry was allowed to design and run its own regulatory regime. In all other regulatory jurisdictions, the industry's approach was rejected; expert advisors argued that it was incompatible with mandates to protect consumer health. Yet, the approach eventually prevailed everywhere. It did so in large part because of the collective refusal of the flavouring industry over three decades to provide the experimental data that had been requested by the regulatory authorities. This has been a form of regulatory capture, which was triggered by a remarkably effective tactic of non-cooperation with demands for data.

8.
This study investigates the relationship of formal mentoring program design elements (i.e., voluntary participation, input to matching, and effectiveness of training) and management support to the benefits and costs perceived by formal mentors. Data were collected from 97 formal mentors from a Midwestern financial institution. Multiple regressions were performed controlling for time as a mentor in the program, hours spent mentoring, and number of protégés. Voluntary mentor participation was positively related to perceiving rewarding experiences and negatively related to being more trouble than it was worth. Input to the matching process was negatively related to nepotism, and perceptions of training effectiveness were positively related to generativity. Finally, perceived management support for the program was positively related to rewarding experience and recognition, and negatively related to generativity and bad reflection. Three supplemental group interviews were conducted to further explore some of the survey findings. Directions for future research and implications for formal workplace mentoring programs as well as mentoring programs in cross-disciplinary contexts are discussed.

9.
Electronic data archives may supplement the traditional peer-reviewed journal article. The merits of data archiving include public service, a more complete research project, overcoming barriers to limited-access research resources, and increasing the impact of a scientific project. A case study of chimpanzee timing performance in space is derived from the NASA Life Sciences Data Archive (http://lsda.jsc.nasa.gov). An analysis of the archived data suggests that the scalar property (a form of Weber's law) applies to timed performance of a chimpanzee in orbit of the Earth. Challenges associated with data archives are discussed. Although significant challenges are associated with archiving electronic data, these difficulties are outweighed by its merits.
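For context, the scalar property invoked in this abstract has a compact standard statement in the interval-timing literature; the formulation below is general background, not taken from the archived study itself:

```latex
% Scalar property (scalar timing, a form of Weber's law): the standard
% deviation of timed responses grows in proportion to the mean of the
% timed interval, so the coefficient of variation stays roughly constant.
\[
  \sigma(t) \approx k\,\mu(t)
  \qquad\Longleftrightarrow\qquad
  \frac{\sigma(t)}{\mu(t)} \approx k \;\; \text{(constant)}
\]
```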

10.
The validity of what has been termed "scientific" or "systematic" jury selection (SJS) techniques is addressed using data from two actual cases: one criminal and one civil. Data from the highly publicized Joan Little trial indicated that where validity data were available for the survey approach and in-court rating of authoritarianism, these techniques measured what they purported to measure. Validation data were not available for a third technique—in-court rating of nonverbal communication. Data from the civil case indicated that the survey approach could successfully predict verdicts of mock jurors. It is concluded that while these data are suggestive of the validity of two of the techniques used in SJS, more rigorous tests are essential before conclusions can be drawn.

11.
The study examined the impact of changes in the work environment on the construction of place-identity among university academics. Data were collected from five academics at a large distance learning university in South Africa. The institution was undergoing major structural changes at the time of the study. Unstructured questions were used for the data collection. These data were analysed using content analysis, and the results suggested that academics construct identities tied to their place of work and that changes to this place may be perceived as a threat.

12.
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies.
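As a rough sketch of the protocol this abstract describes, consider the following hypothetical illustration, not the MIDDLE authors' implementation: a central optimizer broadcasts a parameter vector, each participant's device evaluates the log-likelihood of its own private data locally, and only scalar likelihood values travel back. The Device class and the normal measurement model are assumptions made for the example.

```python
# Hypothetical sketch of distributed likelihood estimation in the spirit
# of MIDDLE; all names and the normal model are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize


class Device:
    """Stands in for a participant's personal device holding private data."""

    def __init__(self, private_data):
        self._data = private_data  # never leaves the device in a real deployment

    def local_log_likelihood(self, params):
        # Toy measurement model: observations assumed normal with unknown
        # mean and standard deviation (parameterized on the log scale).
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        x = self._data
        return np.sum(-0.5 * np.log(2 * np.pi) - log_sigma
                      - 0.5 * ((x - mu) / sigma) ** 2)


def negative_log_likelihood(params, devices):
    # The central optimizer only ever sees scalar likelihood values, never
    # raw data; a participant who opts out simply stops responding.
    return -sum(d.local_log_likelihood(params) for d in devices)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    devices = [Device(rng.normal(5.0, 2.0, size=30)) for _ in range(10)]
    result = minimize(negative_log_likelihood, x0=np.zeros(2),
                      args=(devices,), method="Nelder-Mead")
    mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
    print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```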

14.
Fifty-one practising scientists made Q-sorts of 90 statements relating to scientific fraud and impropriety. Principal components analysis identified two major groups. Members of the first group (N=18) seemed to support the standard, or received, view about the nature of science and to interpret scientific fraud and impropriety in terms of the individual shortcomings of deviant scientists. Members of the second group (N=7) seemed to adopt a more critical position about the nature of science and were more likely to construe scientific fraud and impropriety as anticipated aspects of the operation of a human social institution. Some implications of these findings for an understanding of the current debate on scientific fraud and impropriety are considered.

15.
The results of an empirical study of the use of evaluation data in community mental health centers are reported. A mailed survey on evaluation use was conducted among the directors of 164 community mental health centers in 19 states; 140 completed questionnaires were returned. Results indicate that certain types of data have important impacts in a majority of centers. Systems resources management data were most highly used, followed by need assessment data and client utilization data. Least used were data on outcomes of intervention and community impact. Data use appears closely tied to the utility of the data in carrying out priority management tasks in a center. Findings have important implications for community psychologists who plan, administer, or evaluate mental health services. The broader role of evaluation in community psychology is also discussed.

16.
Gestures are pervasive in human communication across cultures; they clearly constitute an embodied aspect of cognition. In this study, evidence is provided for the contention that gestures are not only a co-expression of meaning in a different modality but also constitute an important stepping stone in the evolution of discourse. Data are provided from a Grade 10 physics course where students learned about electrostatics by doing investigations for which they constructed explanations. The data show that iconic gestures (i.e. symbolic hand movements) arise from the manipulation of objects (ergotic hand movements) and sensing activity (epistemic hand movements). Gestures not only precede but also support the emergence of scientific language. School science classes turn out to be ideal laboratories for studying the evolution of domain ontologies and (scientific) language. Micro-analytic studies of gesture–speech relationships and their emergence can therefore serve as positive constraints and test beds for synthetic models of language emergence.

17.
The study explored the academic patterns and implications of academic attributions made by students who had been given test feedback at a higher learning institution in Zimbabwe. A sample of 8 participants (female = 4; male = 4; mean age = 21.9; passed the test = 4; failed the test = 4) was purposively selected from a class of second year students majoring in psychology and human resources management. Audio-taped semi-structured interviews were conducted to collect data. Thematic content analysis was used to analyse data. Results indicate that culture and gender moderate academic causal attributions.

18.
There exist a good many scientific studies into risk-taking and risky behavior displayed by reckless drivers; however, there are only a few studies into attitudes towards traffic displayed by candidate drivers. The present study aims to investigate the dimensionality of candidate drivers' attitudes. Data were collected via a questionnaire completed by 258 candidate drivers and were divided into two sets. The first data set was used to explore the underlying factor structure, and five latent factors were derived: Factor A, attitude towards drinking and driving; Factor P, positive attitude towards traffic; Factor S, speeding; Factor T, traffic flow vs. rule obedience; and Factor R, risky candidate driver attitude. The second data set was used to confirm this factorial structure using confirmatory factor analysis. The fit indices showed that the model fitted the data well.
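The split-sample workflow this abstract describes can be sketched as follows; the synthetic responses, the item count, and the use of scikit-learn are assumptions for illustration and do not reproduce the authors' actual analysis or fit indices:

```python
# Illustrative split-sample exploratory factor analysis; synthetic Likert
# responses stand in for the study's questionnaire data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# 258 candidate drivers (as in the study); 20 items is an assumed count.
responses = rng.integers(1, 6, size=(258, 20)).astype(float)

# Split into exploratory and confirmatory halves, as the abstract describes.
half = responses.shape[0] // 2
explore, confirm = responses[:half], responses[half:]

# Exploratory step: extract five latent factors (A, P, S, T, R).
efa = FactorAnalysis(n_components=5, random_state=0).fit(explore)
loadings = efa.components_.T  # item-by-factor loading matrix, shape (20, 5)
print(loadings.round(2))

# A proper confirmatory factor analysis on `confirm`, with fit indices such
# as CFI or RMSEA, would use an SEM package (e.g., semopy); scikit-learn
# does not provide CFA.
```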

19.
What Do the Data Tell Us? Justification of scientific theories is a three-place relation between data, theories, and background knowledge. Though this should be a commonplace, many methodologies in science neglect it. The article elucidates the significance and function of background knowledge in epistemic justification and the consequences for different scientific methodologies. It is argued that there is no simple and at the same time acceptable statistical algorithm that justifies a given theory merely on the basis of certain data. And even if we think we know the probability of a theory, that does not decide whether we should accept it or not.
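One standard Bayesian rendering makes the three-place relation concrete: the support data D lend a theory T is always mediated by background knowledge K. This is offered as an illustration consistent with the abstract, not as the author's own formalism:

```latex
% Bayes' theorem with background knowledge K made explicit: the posterior
% probability of theory T given data D depends on K at every step, so no
% algorithm operating on D alone fixes the verdict on T.
\[
  P(T \mid D, K) = \frac{P(D \mid T, K)\, P(T \mid K)}{P(D \mid K)}
\]
% And even a known posterior P(T | D, K) leaves open the further,
% non-statistical question of whether T should be accepted.
```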

20.
Forensic patients with intellectual disabilities have so far received little attention, which is reflected in the comparatively brief chapters in the standard textbooks and in the low scientific interest in this patient group. There are only a few therapeutic concepts and even less information on their effectiveness. This article presents the Christophorus Clinic in Münster, which was the first forensic institution in Germany to specialize in these patients. The institution incorporates 54 treatment places and started operating on 3 June 2011. In addition to the known fact that a therapy concept must (further) develop over the years, during the first year of operation some aspects have emerged that have crystallized as problem areas specific to this patient group; these are discussed.
