In the contemporary landscape, technological innovations are not merely introducing new functional tools into social life; rather, they act as vectors of structural transformation that deeply affect institutional organization, normative frameworks, and everyday practices. The rise of technologies such as artificial intelligence, algorithmic automation, big data, and digital infrastructures presents unprecedented challenges for critically analyzing social life—particularly with regard to the redistribution of power, the transformation of individual and collective agency, and the redefinition of citizenship, participation, and autonomy.
Understanding these processes requires recognizing that technology is neither neutral nor detached from social conflict. Digital platforms, algorithmic surveillance systems, and data infrastructures generate new regimes of visibility, exclusion, and control, while also enabling novel forms of communication, collective action, and knowledge production. Within this context, it becomes essential to examine how these technologies shape the conditions of possibility for social action, and what principles should guide their design, implementation, and regulation in pursuit of effective digital justice.
From a critical perspective, this thematic line invites reflection on the normative foundations that should guide the integration of technological innovation into social life. This entails examining risks associated with algorithmic opacity, the erosion of privacy, the concentration of informational power, and the potential reproduction of inequality through automated systems. At the same time, it calls for exploring ways in which technology can be reoriented toward emancipatory goals, fostering processes of democratization, institutional transparency, and equitable access to the benefits of digitalization.
In this light, several issues become particularly relevant: procedural justice in digital environments, algorithmic traceability, the protection of individual autonomy in the face of automated decisions, and the development of regulatory frameworks that account for both technical complexity and the social values at stake. The horizon of this reflection is not merely technical, but also political and ethical: it is a matter of interrogating the conditions under which innovation can be articulated with a public rationality grounded in mutual recognition, inclusion, and respect for fundamental rights.
This thematic focus therefore invites research that addresses the relationship between technology and society from critical, interdisciplinary, and normatively grounded perspectives. Contributions are encouraged that combine theoretical analysis with empirical studies on the social, cultural, and political effects of emerging technologies, as well as proposals that help envision more just, open, and sustainable alternatives for the design and use of digital tools.
We propose, among others, the following guiding questions:
1. How do artificial intelligence systems reshape citizenship, individual autonomy, and collective decision-making?
2. What normative principles should guide the ethical regulation of algorithms in both public and private contexts?
3. In what ways do digital technologies transform the structure of the public sphere, visibility regimes, and the processes of building shared meaning?
4. How can we address the tensions between innovation, privacy, and social control in hyper-digitalized societies?
5. What experiences, institutional frameworks, or social practices offer alternative models of technological appropriation centered on the common good?
Finally, TH+In welcomes contributions that, even if they extend beyond the themes outlined above, are aligned with the broader concern for understanding the impact of technological innovation on the configuration of contemporary social life. We especially encourage submissions that explore emerging phenomena, gray areas, or developments that remain under-examined but are crucial to critically rethinking the place of technology in our societies.