Risk

Our measures of overall risk, political risk, and non-political risk rely on word counts that condition on proximity to the use of synonyms for "risk" or "uncertainty".

Overall Risk

Our measure of overall firm-level risk simply counts the frequency of mentions of synonyms for risk or uncertainty and divides by the length of the transcript.
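This count-and-normalize step can be sketched as follows. The synonym list here is a small illustrative subset, not the authors' actual word list, and the tokenization is a simplification:

```python
import re

# Illustrative subset of risk/uncertainty synonyms (the authors' list is larger).
RISK_SYNONYMS = {"risk", "risks", "risky", "uncertainty", "uncertain", "uncertainties"}

def overall_risk(transcript: str) -> float:
    """Count mentions of risk/uncertainty synonyms, divided by transcript length in words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    return sum(w in RISK_SYNONYMS for w in words) / len(words)
```

For example, `overall_risk("we face significant risk and uncertainty this quarter")` returns 2/8 = 0.25.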



Political Risk

We construct our measure of political risk by first defining a training library of political text, archetypical of the discussion of politics, ℙ, and a training library of non-political text, archetypical of the discussion of non-political topics, ℕ. Each training library is the set of all adjacent two-word combinations ("bigrams") contained in the respective political and non-political texts. We similarly decompose each conference-call transcript of firm i in quarter t into the list of bigrams b = 1,…,Bit contained in the transcript. We then count the number of occurrences of bigrams indicating discussion of political topics within a window of 10 words on either side of a synonym for "risk" or "uncertainty", and divide by the total number of bigrams in the transcript:

PRiskit = [ Σ (b = 1 to Bit)  1[b ∈ ℙ \ ℕ] × 1[|b − r| < 10] × (fb,ℙ / Bℙ) ] / Bit    (1)

where 1[·] is the indicator function, ℙ \ ℕ is the set of bigrams contained in ℙ but not in ℕ, and r is the position of the nearest synonym of risk or uncertainty.

The first two terms in the numerator simply count the number of bigrams associated with the discussion of political but not non-political topics that occur in proximity to a synonym for risk or uncertainty (within 10 words). The third term weights each such bigram with a score that reflects how strongly the bigram is associated with the discussion of political topics, where fb,ℙ is the frequency of bigram b in the political training library and Bℙ is the total number of bigrams in the political training library.
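The counting scheme above can be sketched as follows. The synonym list, tokenization, and toy training-library inputs are illustrative assumptions, and distance is measured in word positions; this is a sketch of the counting logic, not the authors' implementation:

```python
import re
from typing import Dict, List, Set

# Illustrative subset of risk/uncertainty synonyms (the authors' list is larger).
RISK_SYNONYMS = {"risk", "risks", "risky", "uncertainty", "uncertain"}

def bigrams(words: List[str]) -> List[str]:
    """All adjacent two-word combinations, in order of appearance."""
    return [f"{a} {b}" for a, b in zip(words, words[1:])]

def prisk(transcript: str,
          political: Dict[str, int],  # bigram -> count in the political library P
          nonpolitical: Set[str],     # bigrams in the non-political library N
          window: int = 10) -> float:
    """Weighted count of P\\N bigrams near a risk synonym, per transcript bigram."""
    words = re.findall(r"[a-z']+", transcript.lower())
    transcript_bigrams = bigrams(words)
    if not transcript_bigrams:
        return 0.0
    b_p = sum(political.values())  # Bℙ: total bigram count of the political library
    risk_positions = [i for i, w in enumerate(words) if w in RISK_SYNONYMS]
    total = 0.0
    for pos, bg in enumerate(transcript_bigrams):
        if bg in political and bg not in nonpolitical:  # b ∈ P \ N
            # within 10 words of the nearest risk/uncertainty synonym?
            if risk_positions and min(abs(pos - r) for r in risk_positions) < window:
                total += political[bg] / b_p  # term-frequency weight fb,P / BP
    return total / len(transcript_bigrams)
```

For instance, with a toy political library `{"trade policy": 2, "tax reform": 1}` and an empty non-political set, the transcript "there is risk around trade policy decisions" yields (2/3) / 6 ≈ 0.111: one qualifying bigram near the synonym "risk", weighted by its library frequency, over six transcript bigrams.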



Topic-Specific Political Risk

Our topic-specific measures identify risks associated with specific political topics, rather than politics in general. To this end, we use a set of training libraries ℤ = {ℙ1, …, ℙZ}, each containing the complete set of bigrams occurring in one of Z = 8 texts archetypical of discussion of a particular political topic: "economic policy & budget," "environment," "trade," "institutions & political process," "health care," "security & defense," "tax policy," and "technology & infrastructure."

As before, we then calculate the share of the conversation that centers on risks associated with political topic T as the weighted number of bigrams occurring in ℙT but not in the non-political library, ℕ, that are used in conjunction with a discussion of political risk:

PRiskTit = [ Σ (b = 1 to Bit)  1[b ∈ ℙT \ ℕ] × 1[|b − p| < 10] × (fb,ℙT / BℙT) × log(Z / fb,Z) ] / Bit

where p is the position of the nearest bigram already counted in our measure of overall political risk PRiskit (eq. 1), that is, a political but not non-political bigram that is also near a synonym for risk or uncertainty (the nearest bigram for which 1[b ∈ ℙ \ ℕ] × 1[|b − r| < 10] > 0). Both bigrams (p and b) are again weighted with their term frequencies and inverse document frequencies.

Because we must now distinguish between multiple political topics, b's inverse document frequency, log(Z / fb,Z), adjusts each bigram's weighting for how unique its use is to the discussion of a specific topic relative to all other political topics, where fb,Z is the number of libraries in ℤ that contain bigram b. For example, a bigram that occurs in all topic-based political libraries is not useful for distinguishing any particular topic and is thus assigned a weight of log(Z / Z) = 0. By contrast, the weight increases the more unique the use of a bigram is to the discussion of topic T, and is highest (log(Z / 1)) for a bigram used exclusively in discussion of topic T.
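The inverse-document-frequency weight can be computed directly from fb,Z; a minimal sketch, with function and parameter names chosen here for illustration:

```python
import math

Z = 8  # number of topic-specific political training libraries in the set Z

def idf(f_b_Z: int, num_libraries: int = Z) -> float:
    """Inverse document frequency log(Z / fb,Z), where fb,Z is the number of
    topic libraries in which bigram b appears (1 <= fb,Z <= Z)."""
    return math.log(num_libraries / f_b_Z)
```

A bigram appearing in all 8 topic libraries gets weight log(8/8) = 0, while a bigram exclusive to one topic gets the maximal weight log(8/1) ≈ 2.08.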



Non-Political Risk

We measure the firm's exposure to non-political risk in the same way as PRiskit (eq. 1), but count and weight non-political bigrams rather than political bigrams, that is, ℕ \ ℙ rather than ℙ \ ℕ.
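The swap between the two measures amounts to taking the opposite set difference of the training libraries. With toy libraries (illustrative bigrams only, not the actual training texts):

```python
# Illustrative toy libraries; in the paper these are the full bigram sets
# of the political and non-political training texts.
P = {"trade policy", "tax reform", "public health"}   # political bigrams
N = {"public health", "quarterly earnings"}           # non-political bigrams

political_only = P - N      # P \ N: bigrams counted and weighted for PRisk
nonpolitical_only = N - P   # N \ P: bigrams counted and weighted for NPRisk
```

Bigrams appearing in both libraries (here, "public health") drop out of both measures.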

