In the second stage, we use the output of the hedge classifier to generate sentence-level features based on the number of hedge cues, the identity of the hedge cues, and a bag-of-words feature vector, and train a logistic regression classifier for sentence-level uncertainty.
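The second-stage feature construction described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the function and feature names (`sentence_features`, `cue_count`, the `cue=`/`bow=` prefixes) are assumptions.

```python
from collections import Counter

def sentence_features(tokens, cue_predictions, vocabulary):
    """Build a sentence-level feature dict from token-level hedge
    predictions: the cue count, the identity of each predicted cue,
    and a bag-of-words count vector over a fixed vocabulary.
    Illustrative names only, not the authors' code."""
    cues = [tok for tok, is_cue in zip(tokens, cue_predictions) if is_cue]
    features = {"cue_count": len(cues)}
    # One indicator feature per predicted cue identity.
    for cue in cues:
        features[f"cue={cue.lower()}"] = 1
    # Bag-of-words counts restricted to the vocabulary.
    counts = Counter(t.lower() for t in tokens)
    for word in vocabulary:
        features[f"bow={word}"] = counts.get(word, 0)
    return features

tokens = ["The", "results", "may", "suggest", "a", "link"]
preds = [False, False, True, True, False, False]  # token-level hedge output
feats = sentence_features(tokens, preds, vocabulary=["results", "link"])
```

A dictionary like `feats` would then be vectorized and fed to a logistic regression classifier that predicts the sentence-level uncertainty label.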
Results show that the syntactic context of the tokens, in conjunction with the wordlist-based features, proved useful in predicting uncertainty cues.
Children with autism are only impaired on social perception tasks when there is more than one cue, suggesting that their impairment on orienting, disengaging and selecting targets for attention underlies the general social deficits (Gillberg 1999).
By contrast, a disruption in the perception-action link in psychopathic or sociopathic individuals (the terms have been used interchangeably) would account for the characteristic lack of normal autonomic responses to the distress cues of another, the social isolation, and the apparent disregard for the emotional and physical state of others (Aniskiewicz 1979; Blair et al.)
Representation as a Common Denominator

As mentioned in the introduction, the most robust effects in empathy experiments can broadly be categorized as effects of familiarity/similarity, past experience, learning (explicit and implicit) and cue salience.
We treat the detection of sentences containing uncertain information (Task1) as a token classification task, since the presence or absence of cues determines the sentence label.
Action is a means of acquiring perceptual information about the environment. Turning around, for example, alters your spatial relations to surrounding objects and, hence, which of their properties you visually perceive. Moving your hand over an object's surface enables you to feel its shape, temperature, and texture. Sniffing and walking around a room enables you to track down the source of an unpleasant smell. Active or passive movements of the body can also generate useful sources of perceptual information (Gibson 1966, 1979). The pattern of optic flow in the retinal image produced by forward locomotion, for example, contains information about the direction in which you are heading, while motion parallax is a "cue" used by the visual system to estimate the relative distances of objects in your field of view. In these uncontroversial ways and others, perception is instrumentally dependent on action. According to an explanatory framework that Susan Hurley (1998) dubs the "Input-Output Picture", the dependence of perception on action is purely instrumental:
In order to identify the scope of each cue (Task2), we learn a classifier that predicts whether each token of a sentence belongs to the scope of a given cue.
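The per-cue scope step above can be sketched as follows: for one cue, every token of the sentence becomes a binary classification instance (in scope vs. out of scope). This is a minimal sketch under assumed features; the function name `scope_instances` and the `distance_to_cue` feature are illustrative, not from the paper.

```python
def scope_instances(tokens, cue_index):
    """For a single cue, emit one feature dict per token; a trained
    binary classifier would label each instance as in or out of the
    cue's scope. Features here are assumptions for illustration."""
    instances = []
    for i, tok in enumerate(tokens):
        instances.append({
            "token": tok.lower(),
            "distance_to_cue": i - cue_index,  # signed token distance
            "is_cue": i == cue_index,
        })
    return instances

tokens = ["This", "may", "indicate", "an", "error"]
rows = scope_instances(tokens, cue_index=1)
```

A sentence with several cues would simply yield one such instance set per cue.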
On the biomedical corpus, our methods achieved F-measures of 77.86% for detecting in-domain uncertain sentences, 77.44% for recognizing hedge cues, and 19.27% for identifying their scopes.
Second, a set of manually crafted rules, operating on dependency representations and the output of the classifier, is applied to resolve the scope of the hedge cues within the sentence.
Unlike popular statistical optimization techniques, the learner uses structural information of the input syllables rather than distributional cues to segment words.
In our participation, we sought to assess the extensibility and portability of our prior work, which relies on linguistic categorization and weighting of hedging cues and on syntactic patterns in which these cues play a role.
We model the in-sentence uncertainty cue and scope detection task as an L2-regularised approximate maximum-margin sequence labelling problem, using the BIO encoding.
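The BIO encoding mentioned above marks each token as Beginning a span, Inside a span, or Outside all spans. A minimal sketch of the encoding itself (not the authors' sequence labeller), with an assumed `CUE` label and `(start, end)` half-open spans:

```python
def bio_encode(tokens, spans):
    """Encode token spans as BIO tags: B-CUE at a span's first token,
    I-CUE for the rest of the span, O elsewhere. spans is a list of
    (start, end) token-index pairs with end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in spans:
        tags[start] = "B-CUE"
        for i in range(start + 1, end):
            tags[i] = "I-CUE"
    return tags

tokens = ["It", "might", "possibly", "rain"]
tags = bio_encode(tokens, [(1, 3)])
# tags == ["O", "B-CUE", "I-CUE", "O"]
```

With this encoding, cue (or scope) detection reduces to predicting one tag per token, which is what the sequence labelling formulation operates on.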