Enacted Technologies as Epistemic Infrastructure:
How Information Systems Shape Institutional Knowledge Production
Guy-Maurille Massamba
Introduction
When institutions adopt information technologies, they are conventionally understood to be acquiring instruments for managing, processing, and communicating information that exists independently of the systems designed to handle it. On this view, a database records facts about the world, a classification system organizes pre-existing categories, and an information management platform facilitates the flow of knowledge that would otherwise move through slower or less efficient channels. Technology appears as epistemically neutral infrastructure—a conduit for knowledge rather than a constituent of it.
This understanding, while intuitive, obscures a more fundamental relationship between information technologies and institutional knowledge. As scholarship in science and technology studies, critical information studies, and organizational theory has increasingly recognized, technologies do not merely transmit or store information; they actively shape what can be known, what counts as knowledge, and how knowledge claims are evaluated and justified (Bowker & Star, 1999; Jasanoff, 2004; Orlikowski, 2000). The categories embedded in a database determine what distinctions are cognizable; the metrics captured by an information system determine what is measured and what escapes measurement; the interfaces through which users interact with data shape the questions that can be asked and the forms that answers can take.
This essay develops the concept of enacted technologies as epistemic infrastructure, synthesizing Fountain's (2001) framework of technology enactment with scholarship on the knowledge-constituting functions of technical systems. The central argument is that understanding how technologies are enacted—how they are implemented, adapted, and integrated into institutional practices—is essential to understanding the epistemic landscape of contemporary organizations and governance. Technology enactment is not merely an organizational or political process; it is fundamentally an epistemological one, with profound implications for what institutions can know and how they can reason about the domains they govern.
Technology Enactment and Its Epistemic Dimensions
Fountain's (2001) influential framework distinguishes between objective technologies—the material artifacts of hardware, software, and networks as designed—and enacted technologies—those same artifacts as actually implemented and used within specific organizational and institutional contexts. This distinction challenges technologically deterministic accounts that attribute inherent effects to technical systems, emphasizing instead that outcomes depend on processes of interpretation, adaptation, and integration shaped by organizational structures, institutional norms, cognitive frames, and political dynamics.
While Fountain's analysis focuses primarily on organizational and institutional variables, the enactment framework has significant epistemic implications that warrant explicit elaboration. The process of enacting a technology involves not only decisions about organizational workflows and authority structures but also decisions about categories, classifications, metrics, and evidentiary standards—decisions that constitute the epistemic architecture within which institutional knowledge is produced.
Consider the implementation of an information management system in a public agency. The enacted system embodies determinations about what data fields will exist, what values those fields can take, what relationships between data elements will be represented, and what queries the system will support. These technical specifications are simultaneously epistemic specifications: they determine what the agency can record, retrieve, aggregate, and analyze. A field that does not exist in the database represents a phenomenon that cannot be systematically known through that system; a relationship that cannot be queried represents a connection that cannot be investigated; a category that admits no exceptions represents a distinction that cannot be complicated (Bowker & Star, 1999).
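The point can be made concrete with a deliberately minimal sketch of such a system (the table, field names, and category values are all hypothetical, chosen only for illustration): a field the designers never defined cannot be recorded at all, and a value outside the fixed category list is rejected outright.

```python
import sqlite3

# Hypothetical case-tracking schema: the agency can record exactly the
# fields, and exactly the category values, that the designers chose.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cases (
        case_id  INTEGER PRIMARY KEY,
        category TEXT NOT NULL CHECK (category IN ('housing', 'health', 'other')),
        opened   TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO cases (category, opened) VALUES ('housing', '2024-01-15')")

# A phenomenon the schema does not encode cannot be systematically recorded:
try:
    conn.execute(
        "INSERT INTO cases (category, opened, urgency) "
        "VALUES ('health', '2024-02-01', 'high')"
    )
except sqlite3.OperationalError as err:
    print(err)  # the column 'urgency' does not exist, so the insert fails

# And a distinction outside the fixed category list is simply rejected:
try:
    conn.execute("INSERT INTO cases (category, opened) VALUES ('transport', '2024-03-01')")
except sqlite3.IntegrityError as err:
    print(err)  # the CHECK constraint rejects the unrecognized category
```

In this sketch the CHECK constraint is the categorical determination made durable: unlike a clerk's pencilled note, the database offers no channel through which an unanticipated case can register.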
The epistemic consequences of these design choices are enacted—that is, they emerge through the interaction of technical specifications with organizational practices, professional cultures, and institutional pressures. The same database schema might be enacted differently across agencies depending on how staff interpret ambiguous cases, how managers incentivize data entry, and how institutional routines incorporate or bypass the system's categories. Understanding the epistemic infrastructure of an organization thus requires attention not only to technical design but to the full process through which technologies become embedded in knowledge-producing practices.
Classification Systems and the Constitution of Knowable Worlds
The knowledge-constituting function of enacted technologies is perhaps most evident in classification systems, which have received sustained attention in science and technology studies. Bowker and Star's (1999) foundational analysis in Sorting Things Out demonstrates that classification schemes are not neutral representations of pre-existing natural kinds but active interventions that shape what can be perceived, communicated, and acted upon. Classifications impose order on heterogeneous phenomena by emphasizing certain similarities while suppressing others; they create boundaries that determine what falls inside and outside categories; they establish hierarchies that privilege certain distinctions over alternatives.
When classification systems are embedded in information technologies, their epistemic effects are amplified and stabilized. A paper-based filing system permits informal workarounds—a clerk might pencil a note indicating that a case does not quite fit its assigned category. A database enforces its categories more rigidly; cases must be assigned to existing classifications, and the system's structure makes it difficult to represent ambiguity or contestation (Star & Ruhleder, 1996). The enacted technology thus tends to naturalize its categories, making them appear as inevitable features of the domain rather than contingent choices among alternatives.
Scott's (1998) analysis of "legibility" in state administration illuminates the political dimensions of this epistemic constitution. Modern states, Scott argues, have systematically imposed standardized categories—uniform surnames, cadastral maps, standardized measures—that render populations and territories "legible" to administrative observation. These classifications do not simply describe pre-existing social reality; they actively reshape that reality to fit administrative categories, often suppressing local knowledge and complexity in favor of synoptic visibility. Information technologies extend and intensify these legibility projects, enabling classification at scales and speeds previously impossible while embedding categorical determinations in technical systems resistant to local modification.
The epistemic politics of classification become particularly consequential when enacted technologies determine access to resources, rights, or recognition. Eubanks's (2018) study of automated eligibility systems in public welfare demonstrates how the categories embedded in these systems determine who can be recognized as deserving assistance and whose needs fall outside the system's epistemic grasp. The enacted technology does not merely process applications according to pre-existing criteria; it constitutes the criteria through its categorical structure, determining what forms of need are cognizable and what forms of evidence are admissible.
Metrics, Measurement, and the Production of Institutional Vision
Beyond classification, enacted technologies shape institutional knowledge through their determination of what is measured and how. The selection of metrics is an epistemic act with profound consequences for organizational attention and reasoning. What is measured becomes visible to institutional decision-making; what escapes measurement remains in epistemic shadow, difficult to perceive, discuss, or act upon (Porter, 1995).
The enactment of performance measurement systems illustrates this dynamic. When an organization implements technologies for tracking outcomes, the specific metrics embedded in those systems come to define what counts as success and failure, progress and regression. Alternative conceptions of performance that resist quantification, or that would require different measurement strategies, struggle to gain traction against the institutional visibility conferred by the enacted system. As Porter (1995) argues in his history of quantification, the appeal of numerical metrics lies partly in their apparent objectivity—their capacity to provide grounds for decision that appear independent of subjective judgment—but this appearance of objectivity obscures the choices embedded in metric selection and operationalization.
Hacking's historical work on statistical categories (1990), together with his later account of the "looping effects" of human classification, reveals a further epistemic complexity. When measurement systems are applied to human populations, the categories and metrics can reshape the phenomena they purport to measure. Individuals and organizations respond to being measured, adapting their behavior to the criteria embodied in measurement systems. The enacted technology thus participates in constituting the reality it measures, creating feedback loops between institutional knowledge and institutional environment.
Edwards' (2010) study of climate science infrastructure demonstrates these dynamics at global scale. The knowledge claims of climate science depend fundamentally on vast infrastructures of measurement, data collection, and computational modeling. These infrastructures do not simply record a pre-existing climate; they constitute "climate" as a knowable global phenomenon through choices about what to measure, how to calibrate instruments, how to interpolate between measurement points, and how to model relationships between variables. The enacted technologies of climate science are epistemic infrastructure in the fullest sense—without them, global climate as an object of knowledge would not exist.
Algorithms, Automation, and Epistemic Opacity
The increasing prevalence of algorithmic systems introduces new dimensions to the epistemic analysis of enacted technologies. Algorithms embedded in institutional information systems do not merely store and retrieve information; they process, filter, rank, and recommend, actively shaping what information reaches decision-makers and in what form (Introna & Nissenbaum, 2000). The epistemic infrastructure of algorithmic systems includes not only their explicit categories and metrics but also the often-opaque logics through which inputs are transformed into outputs.
Noble's (2018) analysis of search engine algorithms demonstrates how these systems encode and reinforce particular epistemic orderings. Search results are not neutral reflections of available information but algorithmic constructions that privilege certain sources, framings, and perspectives while marginalizing others. When users rely on search engines for information, they encounter an epistemic environment shaped by algorithmic determinations they neither chose nor fully understand. The enacted technology structures not only what information is accessible but what information appears authoritative, relevant, and trustworthy.
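The constructive character of ranking can be illustrated with a toy sketch (the documents, attributes, and weights below are invented for illustration and correspond to no real system): the ordering users encounter is a function of weights someone chose, not a neutral reflection of the underlying corpus.

```python
# Hypothetical corpus: each document scored on two invented attributes.
documents = [
    {"title": "Community archive", "relevance": 0.9, "popularity": 0.2},
    {"title": "Major outlet",      "relevance": 0.6, "popularity": 0.9},
    {"title": "Agency report",     "relevance": 0.8, "popularity": 0.5},
]

def rank(docs, w_relevance, w_popularity):
    """Order documents by a weighted score; the weights are design choices."""
    score = lambda d: w_relevance * d["relevance"] + w_popularity * d["popularity"]
    return [d["title"] for d in sorted(docs, key=score, reverse=True)]

# Two enactments of "the same" corpus yield different epistemic orderings:
print(rank(documents, w_relevance=1.0, w_popularity=0.0))
# → ['Community archive', 'Agency report', 'Major outlet']
print(rank(documents, w_relevance=0.3, w_popularity=1.0))
# → ['Major outlet', 'Agency report', 'Community archive']
```

Nothing in the corpus changed between the two queries; only the weighting did. What appears first, and hence what appears authoritative, is an artifact of that choice.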
The opacity of algorithmic systems poses distinctive challenges for epistemic accountability. Traditional information systems, while embedding consequential design choices, are in principle amenable to inspection and critique—one can examine database schemas, review classification manuals, and assess measurement protocols. Machine learning systems that derive their classifications from training data rather than explicit programming resist this form of scrutiny; their epistemic structure is emergent rather than designed, embedded in statistical patterns rather than articulated rules (Burrell, 2016). When such systems are enacted in institutional contexts, they create epistemic infrastructure whose operations may be opaque even to those who implement and operate them.
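The contrast between designed and emergent epistemic structure can be sketched in miniature (the task, data, and numbers are entirely hypothetical): an articulated rule can be read and contested, while a criterion learned from past decisions exists only as fitted parameters.

```python
# A designed criterion: the rule is stated and can be inspected directly.
def eligible_by_rule(income_10k):
    return income_10k < 3.0  # the threshold is explicit and contestable

# An emergent criterion: a simple perceptron fit to (hypothetical) past
# decisions. Format: ((income in $10k, dependents), past decision).
training = [((1.5, 2), 1), ((4.5, 0), 0), ((1.2, 1), 1), ((6.0, 3), 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(1000):
    for (x1, x2), y in training:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += lr * (y - pred) * x1
        w[1] += lr * (y - pred) * x2
        b += lr * (y - pred)

# The fitted parameters reproduce the past decisions, but they encode a
# statistical pattern rather than an articulated rule:
print("learned weights:", w, "bias:", b)
```

Even in this trivially small case, the learned numbers do not announce a threshold or a justification; at the scale of real machine learning systems, with thousands or millions of parameters, the gap between fitted pattern and articulable rule widens accordingly.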
This algorithmic opacity is consequential for democratic governance. If citizens are to hold institutions accountable for their decisions, they require some understanding of the epistemic bases on which those decisions rest. When consequential determinations emerge from algorithmic systems whose epistemic structure resists articulation, the conditions for meaningful accountability are compromised. The enacted technology interposes an epistemic black box between institutional decision and public scrutiny, one whose contents cannot be fully examined.
Epistemic Infrastructure and Democratic Governance
The concept of enacted technologies as epistemic infrastructure has significant implications for democratic theory and practice. Democratic governance presupposes that citizens can access, evaluate, and contest the knowledge claims on which policy decisions are based. When the epistemic infrastructure of governance is embedded in technical systems, the accessibility of that infrastructure to democratic scrutiny becomes a matter of considerable importance.
Jasanoff's (2004) concept of "civic epistemology" provides a useful framework for analyzing these dynamics. Different political cultures, Jasanoff argues, develop distinctive conventions for establishing public knowledge—different standards of evidence, different procedures for validating expertise, different relationships between technical and popular authority. The enacted technologies of governance participate in constituting civic epistemology by determining what forms of evidence are produced, what expertise is required for interpretation, and what possibilities exist for public engagement with technical knowledge claims.
When institutional information systems produce knowledge in forms accessible only to technical specialists, they shift epistemic authority toward those specialists and away from broader publics. When algorithmic determinations replace explicitly reasoned judgments, they may render the epistemic basis of decisions less available for contestation. When the categories of administrative databases structure public discourse about policy problems, they constrain the terms in which political alternatives can be articulated.
These considerations suggest that the design and enactment of institutional information systems should be understood as matters of democratic concern, not merely technical or administrative questions. Decisions about what categories a system will recognize, what metrics it will track, what algorithmic processes it will employ, and what transparency mechanisms it will support are decisions about the epistemic infrastructure of governance—decisions that shape what the polity can collectively know and how it can reason about public problems.
Implications for Institutional Design
Understanding enacted technologies as epistemic infrastructure suggests several principles for institutional design. First, the epistemic consequences of technology design choices deserve explicit attention alongside functional and organizational considerations. System designers and institutional decision-makers should ask not only whether a system will perform its intended functions efficiently but also what epistemic infrastructure it will establish—what categories it will naturalize, what metrics it will privilege, what phenomena it will render visible or invisible.
Second, the enactment process itself requires epistemic scrutiny. Because the same technical system can be enacted in epistemically different ways depending on organizational context, attention must be paid to how systems are actually implemented, how categories are interpreted in practice, how metrics are used in decision-making, and how technical knowledge claims acquire institutional authority. Epistemic auditing of enacted technologies could examine whether systems produce the knowledge they are assumed to produce and whether that knowledge serves the purposes for which it is deployed.
Third, transparency mechanisms should be designed with epistemic accessibility in mind. Making technical systems formally open to inspection is insufficient if the knowledge required to interpret them remains specialized. Meaningful epistemic accountability requires not only access to technical specifications but also interpretive resources that enable non-specialists to understand and evaluate the epistemic infrastructure embedded in institutional technologies.
Fourth, attention to epistemic diversity suggests value in maintaining multiple, potentially competing systems of classification, measurement, and knowledge production. Monocultures of epistemic infrastructure—situations in which a single system determines what can be known across an entire domain—create risks of systematic blindness to whatever that system cannot capture. Preserving space for alternative ways of knowing, including forms of knowledge that resist digitization, can provide epistemic resilience against the limitations of any particular technical system.
Conclusion
Technologies enacted in institutional contexts function as epistemic infrastructure, shaping what institutions can know, what counts as evidence, and how decisions are justified. This understanding extends Fountain's (2001) enactment framework by elaborating its epistemological dimensions and connecting it to scholarship in science and technology studies on classification, measurement, and the social construction of knowledge.
The concept of epistemic infrastructure illuminates the stakes of technology design and implementation decisions. Choices about database schemas, classification categories, performance metrics, and algorithmic processes are not merely technical or administrative; they are choices about the constitution of institutional knowledge, with consequences for organizational reasoning, policy outcomes, and democratic accountability. The enacted technology determines the boundaries of the institutionally knowable, privileging certain phenomena, distinctions, and relationships while consigning others to epistemic invisibility.
Recognizing technologies as epistemic infrastructure reframes debates about digital governance, algorithmic accountability, and the automation of institutional decision-making. These debates concern not only efficiency, fairness, or procedural propriety but also the distribution of epistemic power—the power to determine what can be known and what must remain unknown, what is measurable and what escapes measurement, what is legible to institutional vision and what remains in shadow. In an era of expanding technological mediation of governance, understanding and contesting the epistemic infrastructure of enacted technologies becomes an essential task for both scholarship and democratic practice.
References
Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. MIT Press.
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Edwards, P. N. (2010). A vast machine: Computer models, climate data, and the politics of global warming. MIT Press.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Fountain, J. E. (2001). Building the virtual state: Information technology and institutional change. Brookings Institution Press.
Hacking, I. (1990). The taming of chance. Cambridge University Press.
Introna, L. D., & Nissenbaum, H. (2000). Shaping the web: Why the politics of search engines matters. The Information Society, 16(3), 169–185. https://doi.org/10.1080/01972240050133634
Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and social order. Routledge.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428. https://doi.org/10.1287/orsc.11.4.404.14600
Porter, T. M. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.
Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.
Star, S. L., & Ruhleder, K. (1996). Steps toward an ecology of infrastructure: Design and access for large information spaces. Information Systems Research, 7(1), 111–134. https://doi.org/10.1287/isre.7.1.111
