
In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. This addition aims to bridge the gap between LLM capabilities and user perceptions, promoting the ethically responsible development and use of LLM-based technology. Research has recently started investigating artificial intelligence (AI) under a socio-technical lens, attempting to contextualize this technology within its broader social and organizational environment.
Prominently, Ehsan et al. have contributed to this line of research. Our argument goes as follows. First, following Shanahan et al., we distinguish between roles and personas. Roles are the expected behaviors of the LLMs within the socio-technical context they operate in. LLMs can follow role and persona assignments (in short, social attributions) thanks to their notable capability to perform a variety of downstream tasks in different contexts [2, 9, 5]. In particular, this capability affords humans a certain degree of flexibility in performing social attributions of LLMs.
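As a concrete illustration of this flexibility, the sketch below shows the common pattern of assigning a persona to an LLM through a system prompt in a chat-style interface. The `generate` stub and the persona text are hypothetical and serve only to keep the example self-contained; they are not part of any specific API discussed in this work.

```python
# Minimal, illustrative sketch of how a role/persona is assigned to an LLM
# via a system prompt. The model call is stubbed out: `generate` is a
# hypothetical placeholder, not a real API.

def generate(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call to an LLM."""
    return f"[model reply conditioned on persona: {messages[0]['content']!r}]"

# The same underlying model can be given very different social roles
# simply by changing the system prompt.
counsellor_persona = (
    "You are a supportive listener. You are NOT a licensed therapist "
    "and must encourage users to seek professional help."
)
messages = [
    {"role": "system", "content": counsellor_persona},   # role/persona assignment
    {"role": "user", "content": "I have been feeling very low lately."},
]

print(generate(messages))
```

Swapping the system prompt for a different persona changes the role the model plays without any change to the underlying model, which is precisely what makes social attributions so easy to perform.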
This flexibility, however, is also a source of notable risks: users may perform different attributions depending on their perception of the abilities of the LLMs, and these attributions may stem from incorrect perceptions of an LLM's objective capabilities. This problem, we argue, is not restricted to the case of humans interacting with LLM-based applications. The resulting risks include a higher propensity to be nudged, the reinforcement of negative behaviors [22], cases of epistemic injustice [17], wrong accountability attributions [10], and, in general, unwarranted trust in LLM-based applications [16, 11].
Hence, developing effective strategies to counter social misattributions of LLMs is key to developing LLM-based applications responsibly.
LLMs are a type of generative AI performing context-aware text generation [22]. Some authors state that these models answer user queries similarly to the autocomplete function of a highly sophisticated search engine [12]. However, this perspective on the function of LLMs is somewhat limiting; in fact, LLMs serve several different functions. To elaborate on this point, following Crilly [6], we propose a separation between technical and non-technical functions of AI technology.
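To make the autocomplete analogy concrete, the sketch below implements greedy autoregressive decoding, the basic mechanism behind context-aware text generation. It relies on the Hugging Face transformers library; the choice of the small gpt2 checkpoint and the prompt are assumptions made purely for illustration.

```python
# Minimal sketch of context-aware, autoregressive text generation: at each
# step the model conditions on the full context so far and appends the
# single most likely next token (greedy decoding).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Talking to someone about how you feel can"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits                    # (1, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # most likely next token
        input_ids = torch.cat([input_ids, next_id], dim=-1)      # extend the context

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```

Greedy decoding is used here only for simplicity; deployed systems typically sample from the predicted token distribution instead.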