Technical Report on Decentralized Multifactor Authentication

Abstract

With the contemporary surge in the storage and transmission of high-value digital data, such as medical records and financial transactions, securing that data is more crucial than ever. To mitigate the risks associated with centralized systems, decentralized approaches distribute this data across different servers in various trust domains. However, the benefits of this approach cannot be realized if the mechanisms that govern access to the data are still centralized. This document examines the challenges of decentralized authentication, emphasizing the necessity for robust multi-factor authentication (MFA). It reviews regulatory aspects by drawing on leading standards, sets the context by exploring a range of cryptographic techniques at a high level, and focuses in detail on two novel multi-party computation (MPC) protocols for MFA. Finally, it addresses engineering challenges associated with decentralizing widely used factors (such as email and phone verification) and embracing wallet ownership validation.

1 Introduction

More and more types of sensitive and high-value data, such as medical records, financial transactions and genome material, are being stored digitally. In order to avoid a central point of failure, this information can be distributed over n servers hosted in different trust domains, so that a malicious attacker would have to compromise several of them to get access to the data. Accessing and using this data then requires a decentralized authentication protocol.

Traditional authentication systems face inherent challenges that arise from their centralized nature. Authentication against a central server introduces a single point of failure and requires trust in this centralized entity. Web3 is an open web built on decentralized technology such as blockchains. In Web3, centralized servers are no longer the linchpin of authentication. Instead, users typically authenticate to a blockchain by proving ownership of the private key of a wallet. This decentralized approach eliminates the need to trust a central server, offering a resilient and trust-minimized authentication process with no single point of failure.

However, the shift to a decentralized architecture introduces a gap in multi-factor authentication (MFA) [54, 40]. Recognized as a best practice, MFA involves the use of several authentication factors, categorized into four types: (1) something you know, such as a password, (2) something you have, like a unique identifier from a mobile device, (3) something you are, such as a facial biometric, and (4) something you do [54]. Proof of possession of a single factor in Web3, the private key, lacks the additional security provided by the combination of different factors in MFA. For example, should the private key be lost or stolen, the absence of a mechanism to regain control of the account poses a significant challenge.

Designing a decentralized MFA solution is an ongoing challenge, since some of the traditional MFA factors are hard to decentralize. For example, email verification (something you have) typically involves a central server sending an email with a link to the user, a process that is not easy to decentralize. Moreover, given the sensitive nature of MFA information, we need a decentralized authentication solution that preserves data confidentiality, which suggests the use of privacy-enhancing technologies (PETs).
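To make the idea of distributing sensitive data over n servers concrete, the following toy sketch uses Shamir secret sharing, one standard way to realize a threshold distribution in which any t or fewer servers learn nothing while t + 1 can reconstruct. The field size, parameter values and helper names are illustrative choices for this report, not a prescribed implementation; the protocols discussed later build on linear secret sharing of this kind.

```python
# Illustrative sketch: split a secret across n servers with Shamir secret sharing,
# so that any t+1 shares reconstruct it while t or fewer reveal nothing.
# Parameters are toy values chosen for readability.
import random

P = 2**61 - 1  # a Mersenne prime; a deployment would pick the field to fit the data

def share(secret, n, t):
    """Split `secret` into n shares with reconstruction threshold t+1."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, n=5, t=2)      # 5 servers, any 3 can reconstruct
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789
```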
Zero-knowledge proofs (ZKPs) are one of the most prevalent PETs used in current authentication solutions, as they seamlessly adapt to the client-server architecture with its prover/verifier structure, enabling users to prove possession of authentication factors to a verifier server without compromising sensitive information [56]. But they need to be adapted to be used in the context of a decentralized architecture. Indeed, the verifier server, which constitutes a single point of failure and requires trust, needs to be replaced by a network of servers. The distributed nature of multi-party computation (MPC) naturally accounts for this without requiring protocol modifications [26]. In addition, most MPC protocols can be proven secure in the UC framework [14], which allows combining MPC protocols that verify different factors into one secure protocol that verifies their combination without leaking intermediary results. For these reasons, in this report we focus on MPC-based solutions in order to address the general question of how to support MFA on a decentralized infrastructure.

2 Related Work

In this section, we explore the application of relevant privacy-enhancing technologies for authentication purposes, with a specific focus on decentralized authentication.

2.1 ZKP

A zero-knowledge proof is a cryptographic technique that allows one entity (the prover) to demonstrate knowledge of a specific value to another entity (the verifier) without divulging the actual value. An illustration of this is proving that you know a password without disclosing it. A more advanced example is proving that your fingerprint at login time is the same as (i.e. similar enough to) the one registered during enrollment, without disclosing either of the two. These two examples illustrate the use of ZKP for exact [47] and approximate matching [68, 35], respectively (see Section 4). ZKPs, like other privacy-enhancing technologies, offer protection against eavesdropping-based attacks such as man-in-the-middle (MiTM), IP spoofing, denial of service (DoS), and replay attacks when data is transmitted over an untrusted network. In a client-server (i.e. centralized) architecture, ZKPs are seamlessly implemented with the client serving as the prover and the server as the verifier. By distributing the verifier across a network of nodes, authentication against decentralized networks, such as blockchains, becomes achievable using ZKP techniques [50, 1, 59, 62]. For example, Decentralized Identifiers (DIDs) [63] are a foundational component of decentralized identity systems, providing a way to create and manage globally unique identifiers that are not tied to a central authority, for instance on a blockchain. Verifiable Credentials leverage DIDs to enable secure and privacy-preserving sharing of digital credentials [64]. The integration of ZKPs in the context of Verifiable Credentials aims at enhancing privacy by allowing individuals to prove the validity of their credentials without revealing the underlying sensitive information, contributing to a more secure and user-centric digital identity ecosystem.
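As a concrete toy illustration of the prover/verifier interaction described above, the sketch below implements a Schnorr-style interactive proof of knowledge of a discrete logarithm: the prover convinces the verifier that it knows x with y = g^x without revealing x. The group parameters are deliberately tiny and the sketch is not any of the cited ZKP systems; it only fixes ideas.

```python
# Minimal interactive Schnorr-style proof of knowledge of a discrete logarithm,
# as a toy illustration of the prover/verifier structure (deliberately tiny
# parameters; not one of the schemes cited in this section).
import random

p, q, g = 2039, 1019, 4          # p = 2q + 1; g = 4 generates the order-q subgroup
x = random.randrange(1, q)       # prover's secret ("something you know")
y = pow(g, x, p)                 # public value registered with the verifier

# Prover: commit to fresh randomness.
k = random.randrange(1, q)
t = pow(g, k, p)

# Verifier: send a random challenge.
c = random.randrange(q)

# Prover: respond without revealing x.
s = (k + c * x) % q

# Verifier: accept iff g^s == t * y^c (mod p); the transcript reveals nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```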
However, at least two noteworthy challenges arise in practice when applying ZKP to decentralized authentication. Firstly, not all ZKPs are secure under composition [49], which is crucial for safely combining multiple protocols in multi-factor authentication (MFA). Composability within the Universal Composability (UC) framework [14, 15] is highly emphasized in pragmatic cryptography engineering for several compelling reasons. It is the recommended best practice because it ensures that cryptographic building blocks integrate seamlessly without interference, both within a single protocol and when used concurrently in different instances. The adoption of universally composable (UC-secure) Non-Interactive Zero-Knowledge (NIZK) protocols becomes crucial in designing larger cryptographic systems [38, 48], offering a "worry-free" approach akin to using "ideal boxes". Despite its theoretical foundation, UC is of vital importance in practical cryptography, providing protection against numerous perils and subtle attacks that may arise when composability is overlooked. Adherence to composability principles, especially within the UC framework, underpins the robustness and security of cryptographic systems in real-world applications.

Secondly, when decentralized networks, particularly blockchains, are involved, additional challenges emerge due to regulatory compliance issues arising from the immutability of blockchain records. Indeed, the unalterable nature of blockchain entries complicates the correction of errors or updates in credential status. Furthermore, the public visibility of blockchain data may clash with regulatory mandates, such as the rights to erasure and restriction of processing, while immutability could hinder the exercise of rights such as the right to rectification and data portability, among others [30, 36].

2.2 Biometric hashing

Biometric hashing is a technology akin to conventional cryptographic hashing, serving as a one-way function that transforms biometric data into an irreversible cryptographic hash instead of a biometric template. Contrary to cryptographic hash functions, where close inputs are not mapped to close outputs, most biometric hash functions establish a correlation between hash codes in the hash space and the similarities among biometric samples in the original space, using techniques such as matrix mapping or learning-based methods. This correlation is called a relation-preserving property [75]. This way, as long as the biometric data presented during login is similar to the data submitted during user enrollment, the obtained hashes are also similar. Due to its irreversibility, low computational cost, and high storage efficiency, biometric hashing is an interesting technique for privacy-preserving biometric recognition systems [60, 65, 43]. However, the relation-preserving property requires striking a balance between the system's ability to hash similar inputs to similar outputs and the underlying security of the hash function. In recent years, many reconstruction attacks exploiting the relation-preserving property have appeared [67, 29, 27, 44], allowing attackers to reconstruct the underlying biometric information from the binary hash codes.

2.3 HE

Homomorphic encryption (HE) is an encryption scheme allowing operations between the message and ciphertext spaces. Depending on the scheme, it may support homomorphic addition (a + b = Dec(Enc(a) + Enc(b))) and/or homomorphic multiplication (a × b = Dec(Enc(a) × Enc(b))), where a and b are messages, Enc and Dec are the encryption and decryption algorithms, and the operations on ciphertexts may differ from those on plaintexts.
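As a minimal illustration of the additive case, the sketch below implements a toy Paillier-style scheme, in which multiplying ciphertexts yields an encryption of the sum of the plaintexts. The parameters are deliberately small and this is not one of the HE schemes cited in this section; a real deployment would use a vetted library and much larger keys.

```python
# Toy Paillier-style additively homomorphic encryption, illustrating
# Dec(Enc(a) * Enc(b)) = a + b. Tiny primes for readability only; real
# deployments use keys of 2048 bits or more and a vetted library.
import random
from math import gcd

p, q = 101, 113
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)          # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)           # precomputed decryption constant

def enc(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = 17, 25
assert dec(enc(a) * enc(b) % n2) == a + b             # homomorphic addition
assert dec(pow(enc(a), 3, n2)) == 3 * a               # multiplication by a public scalar
```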
Partially homomorphic encryption (PHE) supports an unlimited number of operations of one type [55], somewhat homomorphic encryption (SHE) supports a limited number of operations of both types [12], and fully homomorphic encryption (FHE) supports an unlimited number of operations of both types [31]. HE techniques have mostly been employed to cover biometric authentication factors: face recognition [61], iris recognition [53], fingerprint recognition [73], voice recognition [57], signature recognition [32], and multi-modal factors [70]. Despite the great amount of research effort, HE has mostly been employed in the client-server model, where, in summary, the client provides encrypted information about the authentication factor, which is then compared with the encrypted information held in the server. All thirty works analysed in a recent review [72] only address this two-party case.

A key security challenge inherent in the client-server model, specifically in centralized HE solutions, revolves around the handling of cryptographic keys. Typically, HE employs a public key for encryption and assigns the responsibility of safeguarding the corresponding secret key for decryption to the client. This places a significant burden on the client, who is often not a security expert and tends to be the more vulnerable party. Consequently, the risk of theft or loss of the private key is increased. Additionally, as discussed in [72, Section 3.5], HE may suffer from potential attacks ranging from side-channel attacks to lattice attacks, among others.

In terms of performance, HE schemes demand substantial computing resources due to their inherent high computational complexity. Notably, certain biometric authentication methods utilizing HE exceed a 10-second duration [72]. It is important to highlight that authentication time has to respect internet-wide timeouts to prevent session closure. For instance, Google's products, such as Gmail's SMTP server with a timeout of 10 seconds [66], and Google's content delivery networks with a default timeout of 5 seconds extended to 15 [33], exemplify cases where connection timeouts play a crucial role in system responsiveness.

2.4 Trusted Execution Environments

A Trusted Execution Environment (TEE) is an environment where the code executed and the data accessed are isolated and protected by a secure enclave. TEEs are increasingly common and are also used in the context of biometric authentication; this includes Apple's Touch ID and Face ID [3], BTAA, a blockchain- and TEE-assisted authentication scheme [52], and TEEBAG, which requests the remote device to send the stored authentication templates to the TEE [7]. However, commercial implementations of TEEs have several important vulnerabilities across existing systems. First, critical implementation bugs such as buffer overflows inside Trusted Applications (TAs) [17]. Second, architectural deficiencies such as ill-implemented memory protection mechanisms [28]. Third, overlooked hardware properties such as side channels, which make TEEs vulnerable to attacks such as cache-timing attacks, as processors are very complex [34].

2.5 MPC

In an ideal decentralized environment involving n servers, a secure yet less efficient approach requires a client to authenticate themselves individually with each server using the M authentication factors. This simple method poses a trade-off between usability and security.
Secure multiparty computation (MPC) [18, 10, 21] emerges as a promising technology to overcome this inherent dilemma. MPC serves as a privacy-centric solution with the capacity to introduce decentralization, addressing the challenges posed by conventional methods. This primitive enables a group of parties to collaboratively evaluate a predefined function while ensuring the privacy of their respective inputs.

Traditionally, the focus of MPC systems has predominantly revolved around addressing privacy concerns related to authentication, with a notable emphasis on the two-party setting, where server-client authentication is paramount [13, 74]. Two-party secure computation protocols, especially those exploring biometric factors such as fingerprints [39] utilizing the FingerCode representation [42] and facial recognition [58] employing EigenFaces [69], have garnered significant attention. As reported in [13], secure fingerprint authentication takes approximately 1 second, while secure facial recognition authentication averages around 6.5 seconds. Other works expand the types of authentication factors, facilitating secure multi-modal biometric authentication. For instance, SEMBA [9] integrates information from the face and iris modalities. Unlike earlier approaches leveraging garbled-circuit techniques [10], SEMBA implements the MPC framework through the SPDZ protocol [21], yielding secure authentication mechanisms with an online runtime ranging from 30 ms to 120 ms. It is crucial to note that these timings do not account for communication and round complexity, as the client and server are co-located on the same machine.

To harness the decentralization potential of MPC technology and remove the authentication burden from the user without compromising security, one can adopt insights from [4, Section 6.2] regarding authentication methods. The study [4] outlines the applicability of MPC for both password-based and biometric authentication. In these scenarios, MPC facilitates a secure comparison between the user-provided password/feature vector and its encrypted counterpart on the server. Extending this concept to a distributed environment involves utilizing a secret-shared password/feature vector across n servers, necessitating only one interaction between the client and the servers per authentication factor.

In a parallel research trajectory, MPCAuth [66] exploits the decentralization feature of MPC to achieve a similar outcome: enabling a client and a consortium of n servers to emulate the client's authentication as if it were directed to a single logical server. This approach runs TLS as an MPC protocol and applies it to various authentication factors, including email, SMS, universal second factor, biometrics, and security questions. MPCAuth not only introduces the concept but also provides an implementation of its core protocol (the TLS-in-SMPC protocol) alongside the associated authentication mechanisms. The authors assume a round-trip time of 20 ms and, for email authentication with an established TLS-in-SMPC connection, report an offline runtime of 3.69 s and an online runtime of 430 ms in a 5-server setting. The end-to-end system, covering both the TLS-in-SMPC connection and authentication, has an offline runtime of 18.52 s and an online runtime of 1.71 s in the same 5-server setting.

3 Multi-Factor Authentication

Several regulations mandate or recommend the use of multi-factor authentication (MFA) across various industries to enhance security.
Some prominent regulations include:

• The Payment Services Directive 2 (PSD2): Applies to financial institutions in the European Union and requires strong customer authentication (SCA) [6] for electronic payments [24].

• General Data Protection Regulation (GDPR): Applies to organizations handling personal data of EU citizens and encourages the use of robust security measures, including MFA, to protect personal information [30].

• Health Insurance Portability and Accountability Act (HIPAA): Applies to healthcare organizations in the United States, emphasizing the need for secure access controls, including MFA, to protect patients' health information [37].

• Banking: Regulatory initiatives in various countries, such as the anti-money laundering (AML) directives [25] or the UK's Open Banking, require financial institutions to implement SCA for secure access to financial services.

Alongside these regulations, several standards set guidelines and best practices around the use of MFA. Two prominent ones are NIST SP 800-63 (Digital Identity Guidelines) [54] and ISO/IEC 29115 (Entity Authentication Assurance Framework, EAAF) [40]. Both standards describe several levels of assurance (LoA) indicating the degree of confidence in the identity or credential. For example, NIST SP 800-63 contemplates three different LoAs that specifically address the strength of the authentication process used to verify the identity of the individual: AAL1 (little or no confidence) is single-factor authentication, typically using a password, AAL2 (some confidence) involves two factors, and AAL3 (high confidence) requires MFA, with higher assurance in the authentication process. NIST SP 800-63 contemplates four different types of authentication factors:

• Something You Know: This includes knowledge-based factors such as passwords, personal identification numbers (PINs), or answers to security questions.

• Something You Have: This includes possession-based factors such as a physical token, a smart card, or a mobile device.

• Something You Are: This includes biometric factors such as fingerprints, voice recognition, retina scans, or facial recognition.

• Something You Do: This includes behavioral factors such as typing patterns, or other behavioral biometrics.

The selection and number of authentication factors for a given application depend on aspects such as the application's risk tolerance, the sensitivity of the data involved, and the user experience requirements. Finding an equilibrium between security and usability is crucial, and considerations such as the potential impact of a security breach and the user population's familiarity with specific authentication methods should be taken into account in the decision-making process.

NIST SP 800-63 and ISO/IEC 29115 describe three essential phases: enrollment, credential management and authentication. Enrollment involves user onboarding, identity proofing (see ISO/IEC Technical Specification 29003, Identity Proofing [41]), and, upon successful completion, the creation of a user account and an entry in a register. Credential management includes overseeing the life-cycle of a credential for a registered user, encompassing creation, issuance, activation, usage, revocation, and destruction/archiving. Authentication pertains to the operational use of a credential by the user.

In this report, we focus on three specific steps within the three phases of enrollment, authentication, and credential management:

  1. Account registration within the enrollment phase: User account registration results in the creation of a user account and credentials. These credentials typically include at least a user ID and a password but can encompass additional factors or attributes.

  2. Login within the authentication phase: Returning users undergo returning user authentication (RUA), presenting and validating different credentials according to the risk model.

  3. Change factor within the credential management phase: The user is able to change one credential (i.e. a factor) by relying on the successful verification of other existing credentials.

All the steps described above need to take into account the following security considerations. First, they need to be supported by a decentralized network of nodes where no node acts as a leader or authority. Second, they need to preserve the confidentiality of the underlying credentials, so that they are not revealed to any number of colluding nodes below a threshold t. Third, they need to employ one-time randomness to mitigate risks such as man-in-the-middle (MITM) and replay attacks, and make use of information-theoretic cryptography in order to prevent brute-force attacks. Additional desired requirements for the underlying technology are the preservation of data integrity, availability and resilience of processing.

4 Exact and Approximate Matching for MFA via MPC

Most operations on MFA factors involve exact or approximate matching. Exact matching can be used, for instance, to support password authentication, or to check whether a user's device identifier is correct. Approximate matching can be used to support different types of biometric authentication, based for example on fingerprints, voice recognition, retina scans, or facial recognition. In this section we discuss the MPC approach to these two types of matching.

Exact matching determines whether x = y for secrets x, y. In some instances, such as password authentication, one of the two secrets has already been stored in the network at registration during enrollment time. At the MPC level this detail makes little difference, so in Subsection 4.2 we discuss the general problem of exact matching instead. Approximate matching determines whether two vectors are sufficiently close, according to some distance function d(·,·). Once more, in some cases one of the vectors has already been stored in the network during enrollment time, but we abstract away from this implementation detail and present the general problem of approximate matching instead. This is discussed further in Subsection 4.3.
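To fix ideas, the two target computations look as follows when carried out in the clear; the values and threshold are illustrative. The protocols of Subsections 4.2 and 4.3 compute the same results on secret-shared inputs, so that x and y are never reconstructed by any node.

```python
# The two target computations of this section, shown in the clear to fix ideas.
# Values and threshold are illustrative.

def exact_match(x: int, y: int) -> bool:
    # Exact matching: is x - y equal to zero?
    return x - y == 0

def approx_match(x: list[int], y: list[int], T: int) -> bool:
    # Approximate matching: is the squared Euclidean distance below a threshold T?
    d = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return d < T

print(exact_match(1234, 1234))                          # password / device-id style check
print(approx_match([3, 1, 4, 1], [3, 2, 4, 1], T=5))    # biometric template style check
```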
Passive security in MPC involves defending against adversaries who corrupt participants without altering the protocol, often referred to as honest-but-curious or semi-honest adversaries, whereas active security extends protection to adversaries capable of arbitrarily deviating from the protocol, potentially modifying their code to exploit vulnerabilities, also known as malicious or Byzantine adversaries. Active adversaries in MPC pose a greater threat by having the capability to modify the output of a computation, introducing an additional layer of potential manipulation. For this reason, we focus on efficiency for active security in the dishonest-majority setting. In any case, we assume that the underlying LSSS-based MPC protocol is capable of securely generating sharings of random numbers, outputting them to the client and computing multiplications. In this setting, state-of-the-art MPC protocols for online multiplication, used in both exact and approximate matching, require at least one round [21, 45, 46]. In order to eliminate these online rounds we make use of the protocol described in the next section.

4.1 The Underlying MPC Protocol

Fast user authentication is crucial for a seamless user experience, enhancing user satisfaction and efficiency. Minimizing wait times ensures users can access the desired services promptly, contributing to overall system usability and productivity. For this reason, we make use of the MPC protocol introduced in [71] in order to minimize user wait times. This protocol has an optimal online phase in terms of communication rounds, where the online phase refers to the operations requiring user presence. In essence, it minimizes the user wait time by requiring communication only for sending inputs and receiving computation results, without requiring network nodes to exchange messages during the computation, thereby optimizing the overall protocol execution time.

The protocol of [71] introduces the concept of a masked factor ⟨x⟩_λ hiding a non-zero secret x ∈ Z*_p with an independent, uniformly random mask exponent λ ∈ Z_{p−1}: it is given as ⟨x⟩_λ := x·g^{−λ} ∈ Z*_p, where g is a public generator of Z*_p. This protocol can evaluate an arbitrary (multivariate) polynomial with optimal online round complexity. In its pre-processing phase, it requires, for each monomial, the computation of a correlation pair, which requires exponentiating a public generator g by a secret-shared exponent γ. Each of these can be computed with passive security using an exponentiation protocol [2] requiring n − 1 fan-in-2 multiplications.

Both the exact matching problem and the initial component of the approximate matching problem can be expressed in terms of a polynomial. We notice that in both problems, the fan-in of the products that arise in every monomial of the polynomial is low (i.e. the degree of the monomials is low). Hence, instead of pre-computing sharings [g^γ] using the exponentiation protocol, we do the pre-processing by multiplying invertible random values, reducing the number of pre-processed multiplications. Specifically, the protocol for a monomial [x_1^{j_1} ··· x_k^{j_k}] is as follows: precompute a product of random numbers [r_1^{j_1} ··· r_k^{j_k}], optionally check that these numbers are all invertible, and let the clients submit ⟨x_i⟩ := r_i^{−1}·x_i. The nodes then locally compute the product [x_1^{j_1} ··· x_k^{j_k}] = ⟨x_1⟩^{j_1} ··· ⟨x_k⟩^{j_k}·[r_1^{j_1} ··· r_k^{j_k}]. The general claim for an arbitrary polynomial then follows by extending linearly.
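The following toy simulation illustrates this masked-monomial idea for a single product x1·x2 (degree one in each variable): after the preprocessing has produced a sharing of r1·r2, the nodes obtain a sharing of x1·x2 from the public masked values with purely local work. Additive secret sharing modulo a prime and a dealer-generated preprocessing value stand in for the LSSS protocol of [71]; they are illustrative simplifications, not the actual protocol.

```python
# Toy simulation of the masked-monomial idea for the monomial x1*x2: the nodes
# preprocess a sharing of r1*r2, each client publishes <xi> = ri^-1 * xi, and
# the nodes obtain a sharing of x1*x2 with purely local work in the online phase.
# Additive sharing mod a prime and a trusted dealer stand in for the LSSS of [71].
import random

P = 2**61 - 1
N = 3                                             # number of nodes

def share(v):                                     # additive sharing of v mod P
    parts = [random.randrange(P) for _ in range(N - 1)]
    return parts + [(v - sum(parts)) % P]

def reconstruct(shares):
    return sum(shares) % P

# Preprocessing (dealer-assisted in this sketch): nonzero r1, r2 and a sharing of r1*r2.
r1, r2 = random.randrange(1, P), random.randrange(1, P)
shares_r1r2 = share(r1 * r2 % P)

# Online: clients publish masked factors; no node-to-node communication is needed.
x1, x2 = 31415, 92653
m1 = pow(r1, -1, P) * x1 % P                      # <x1> = r1^-1 * x1
m2 = pow(r2, -1, P) * x2 % P                      # <x2> = r2^-1 * x2
shares_x1x2 = [m1 * m2 % P * s % P for s in shares_r1r2]   # local scaling by <x1><x2>

assert reconstruct(shares_x1x2) == x1 * x2 % P
```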

4.2 Exact matching

In order to determine whether two secrets x and y are the same, it suffices to determine whether x − y is nonzero. Naively revealing the value x − y would leak information on x and y, however, so instead the value r(x − y), for r a nonzero random number, will be computed. Ordinarily, the approach in a linear secret sharing scheme (LSSS)-based MPC system would be as follows:

Exact match in one round, πExactMatchOneRound. [r(x − y)] ← πExactMatchOneRound(x, y)

Preprocessing:

1. The nodes compute a sharing [r] of a random value r.

2. They determine whether r might be zero; if so, restart. The usual approach, dating back to [8], is to multiply by another random value r′ and reveal the result.

Online:

3. The clients send [x] and [y] to the nodes (one or both of them possibly long in advance, in which case they are retrieved from storage instead).

4. The nodes locally compute the sharing [x − y] = [x] − [y] using the linearity of the LSSS.

5. They then compute the multiplication [r(x − y)] = [r] · [x − y].

Ordinarily the cost would hence be as follows: 2 random values and 1 public multiplication [16] in preprocessing, and 1 multiplication in the online phase. By making use of the MPC protocol of Subsection 4.1, we can get rid of the online multiplication with a bit of pre-processing as follows, under the very reasonable assumption that both x and y are nonzero:

Exact match in zero rounds, πExactMatchZeroRounds. [r(x − y)] ← πExactMatchZeroRounds(x, y)

Preprocessing:

1. The nodes compute sharings [r], [r_x], [r_y], [r′], [r′′] of five random values r, r_x, r_y, r′, r′′.

2. They multiply [r_x·r] = [r_x] · [r], then use a public multiplication to compute r_x·r·r′. If this value is zero, restart. Similarly, they multiply [r_y·r] = [r_y] · [r], then use a public multiplication to compute r_y·r·r′′. If zero, restart.

3. The nodes send [r_x] to the input client of x and [r_y] to the input client of y, who reconstruct r_x and r_y respectively.

Online:

4. The clients broadcast ⟨x⟩ := r_x^{−1}·x and ⟨y⟩ := r_y^{−1}·y to the nodes (one or both of them possibly in advance, and retrieved from storage instead).

5. The nodes locally compute the sharing [r(x − y)] = ⟨x⟩·[r_x·r] − ⟨y⟩·[r_y·r] using the linearity of the LSSS.

The preprocessing cost is thus 5 random values, 2 multiplications and 2 public multiplications, and no multiplications are required in the online phase.

Remark 4.1. The first public multiplications can be skipped in the last protocol; the trade-off is a small probability, arising in preprocessing, that the client complains that the value r_x is zero and it needs to be remade.

4.3 Approximate matching

For approximate matching, the aim is to compute a sharing of the inequality [d(x,y) < T], where x, y are secret-shared vectors coming from clients or storage, d(·,·) is some distance function and T is a threshold which is usually public but may also be kept secret if preferred. In the context of biometrics, the most common choice for d(·,·) is the Euclidean distance; equivalently, one may consider its square, which is easier to handle in the setting of MPC. Other choices include the cosine, Mahalanobis and Manhattan distances.

LSSS protocols in the dishonest-majority setting ordinarily require an extra round of communication to compute the squared Euclidean distance d(·,·), which we can remove as before. Naively doing this for the polynomial Σ_{i=1}^ℓ z_i^2, where each z_i equals x_i − y_i, is problematic, as it would leak whether x_i = y_i. Instead, we ask the clients to secret-share Σ_{i=1}^ℓ x_i^2 and Σ_{i=1}^ℓ y_i^2, and compute −2·Σ_{i=1}^ℓ x_i·y_i as before. To ensure that nothing is leaked in case some x_i or y_i is zero, we simply shift both by 1; this does not affect d(x,y).¹

¹ In order to prevent overflows, the unshifted inputs x_i should be less than √((p − 1)/ℓ), which can be ensured with zero-knowledge proofs.

Euclidean distance squared in zero rounds, πEDSZeroRounds. [d(x,y)] ← πEDSZeroRounds(x, y)

Preprocessing:

1. The nodes compute sharings [r_{x_i}], [r_{y_i}] of 2ℓ random values r_{x_i}, r_{y_i}.

2. They perform ℓ multiplications [r_{x_i}·r_{y_i}] = [r_{x_i}] · [r_{y_i}].

3. The nodes send the [r_{x_i}] to the input client of x and the [r_{y_i}] to the input client of y, who reconstruct the values r_{x_i} and r_{y_i} respectively.

Online:

4. The clients broadcast ⟨x_i⟩ := r_{x_i}^{−1}·x_i and ⟨y_i⟩ := r_{y_i}^{−1}·y_i to the nodes, in addition to [Σ_{i=1}^ℓ x_i^2] and [Σ_{i=1}^ℓ y_i^2] (as before, potentially in advance).

5. The nodes locally compute the sharing

[d(x,y)] = [Σ_{i=1}^ℓ x_i^2] + [Σ_{i=1}^ℓ y_i^2] − 2·Σ_{i=1}^ℓ ⟨x_i⟩·⟨y_i⟩·[r_{x_i}·r_{y_i}]

using the linearity of the LSSS.

The preprocessing cost is thus 2ℓ random values and ℓ multiplications, and no multiplications are required in the online phase. As before, if one wants to eliminate the risk of clients complaining that some of the r_{x_i} or r_{y_i} are zero, the protocol is easily modified to use an additional ℓ random values and ℓ public multiplications to ensure that this is not the case.

Regarding the online computation of the subsequent comparison [d(x,y) < T] (not shown in the protocol above), the Catrina-De Hoogh protocol [16] results in a 3- or 4-round online protocol, depending on the underlying ring.

An attacker might try to trick the comparison protocol by overflowing into a small value; although this is unlikely to work, one could demand zero-knowledge proofs to force the input vectors to lie in a specific range, so that no overflows can occur.
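The sketch below is a toy, single-machine simulation of πEDSZeroRounds over additive secret sharing: a dealer stands in for the preprocessing multiplications, the online phase is purely local for the nodes, and the reconstructed result equals the squared Euclidean distance. The additive backend, the dealer and the parameter choices are illustrative simplifications rather than the actively secure LSSS protocol assumed in the text, and the final comparison against the threshold T is omitted.

```python
# Toy single-machine simulation of pi_EDSZeroRounds over additive sharing mod p.
# A dealer stands in for the preprocessing multiplications [r_xi * r_yi]; the
# online phase is purely local for the nodes, as in the protocol above.
import random

P = 2**61 - 1
N = 3            # nodes
L = 4            # feature-vector length

def share(v):
    parts = [random.randrange(P) for _ in range(N - 1)]
    return parts + [(v - sum(parts)) % P]

def reconstruct(shares):
    return sum(shares) % P

def add_shares(a, b):
    return [(ai + bi) % P for ai, bi in zip(a, b)]

# Client inputs (already shifted to be nonzero and small enough to avoid overflow).
x = [4, 8, 2, 6]
y = [4, 7, 2, 9]

# Preprocessing: nonzero masks r_xi, r_yi and sharings of the products r_xi * r_yi.
r_x = [random.randrange(1, P) for _ in range(L)]
r_y = [random.randrange(1, P) for _ in range(L)]
prod_shares = [share(r_x[i] * r_y[i] % P) for i in range(L)]

# Online, client side: masked coordinates and sharings of the squared norms.
mx = [pow(r_x[i], -1, P) * x[i] % P for i in range(L)]
my = [pow(r_y[i], -1, P) * y[i] % P for i in range(L)]
sq_x_shares = share(sum(v * v for v in x) % P)
sq_y_shares = share(sum(v * v for v in y) % P)

# Online, node side: purely local combination using linearity of the sharing.
dist_shares = add_shares(sq_x_shares, sq_y_shares)
for i in range(L):
    coeff = (-2) * mx[i] % P * my[i] % P
    dist_shares = [(d + coeff * s) % P for d, s in zip(dist_shares, prod_shares[i])]

expected = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
assert reconstruct(dist_shares) == expected      # equals 10 for the vectors above
```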
5 Decentralizing MFA Factors

In this section we adapt the previous MFA concepts to the decentralized case. We focus on three steps (account registration, returning user authentication and change factor) within the three phases of enrollment, authentication and credential management contemplated in NIST SP 800-63 and ISO/IEC 29115.

Account Registration. During account registration, often referred to as sign-up, the user registers credentials that fall into one or more NIST SP 800-63 authentication factor types. The main difference compared to a centralized solution is that the authentication factors are not stored in a single server but in a network of nodes. Also, in an MPC-based solution the authentication factors are not sent in plaintext to the network nodes, but rather using some form of threshold-based privacy-preserving mechanism (e.g. secret shares in LSSS MPC [21, 20, 22], encryption in FHE MPC [19], or encoding in garbled-circuit MPC [11, 51]). The threshold t determines the maximum number of network nodes that may collude without being able to reconstruct an authentication factor. The main feature of MPC is that it allows the network nodes to work with the authentication factors without ever having to reconstruct them, for example during returning user authentication.

Returning User Authentication. This function, often referred to as login or sign-in, requires the user to verify a number of factors of different types in order to prove that they are the same user that created the account. In a decentralized solution, this verification is done against each one of the network nodes, so that they can independently conclude that the user is the same. In the MPC case, since factors are stored by the network using some privacy-preserving mechanism, verifying a factor requires running an MPC protocol. In many cases, the purpose of this protocol is to determine whether the credential presented during registration and the one presented at login are exactly the same. This is the case, for instance, for factors such as passwords and device identifiers. In some cases, more complex computations are required. For example, in biometric authentication an MPC protocol is typically required to compute the distance between the registration and login factors and to compare the result against a threshold. Since the MPC protocols we propose are secure in the UC framework [14], we can combine different exact and approximate matching protocols into a single secure MPC protocol. This way, we do not leak the partial result obtained from each factor match, but only the final pass/fail result.

Changing Factors. Users might want to change an authentication factor for a number of reasons, including a new, lost, or stolen device, a forgotten password, a desire to increase security, or the need to comply with regulations. Best practices recommend changing one factor at a time, and doing so by authenticating through a number n of other factors, where n depends on the trade-off between risk tolerance and user experience. In order to minimize risk, the selected n factors should cover as many different NIST SP 800-63 authentication types as possible.

5.1 Biometric Authentication

Biometric authentication validates a user's identity by leveraging distinctive biological attributes like fingerprints, voice, retina, and facial features. The essence of biometric authentication lies in securely storing this unique biological information, allowing for robust identity verification whenever a user seeks access to their account. If a password is compromised, a new one can be generated, whereas this is not the case for biometric information, which is therefore considered very sensitive data. For this reason, and to minimize the risk of this information being stolen or used by a central authority without the user's permission, it makes sense to store it in a secure decentralized network.

The standard biometric authentication process unfolds in two distinct stages: initial biometric data capture on the client side and subsequent verification within the network.
Initially, the client captures diverse biometric data, such as fingerprints, iris scans, or facial features. This data undergoes encoding, for example through a neural network, which transforms it into a representation called a feature vector or biometric template. This is in turn translated into a fixed-point rational representation compatible with finite-field operations. The encoded information is then securely shared across the network using some form of information-hiding mechanism, such as a linear secret sharing scheme. Subsequently, the network uses an MPC protocol to calculate the Euclidean distance between the stored biometric feature vector from the registration phase and the biometric feature vector uploaded during the login attempt, so that the biometric information is never reconstructed. The MPC protocol also compares this distance to a predetermined threshold, declaring the login successful if the distance falls below the specified threshold.
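The sketch below illustrates the client-side encoding step just described: a floating-point feature vector is quantized to fixed-point field elements and each coordinate is secret-shared before upload. The scaling factor, field size and additive sharing are illustrative choices for this sketch; a deployment would match them to the underlying MPC backend.

```python
# Sketch of the client-side encoding step described above: quantize a floating-
# point feature vector to fixed-point field elements and secret-share each
# coordinate before upload. Parameters are illustrative choices only.
import random

P = 2**61 - 1          # prime field used by the (illustrative) MPC backend
SCALE = 2**16          # fixed-point precision: roughly 4-5 decimal digits
N = 3                  # number of network nodes

def to_field(v: float) -> int:
    return round(v * SCALE) % P            # negative values wrap around modulo P

def share(v: int) -> list[int]:
    parts = [random.randrange(P) for _ in range(N - 1)]
    return parts + [(v - sum(parts)) % P]

# A (truncated) feature vector as produced by some embedding model.
template = [0.1312, -0.4821, 0.0057, 0.9930]

# One share vector per node; no single node learns anything about the template.
encoded = [share(to_field(v)) for v in template]
per_node = [[coord[i] for coord in encoded] for i in range(N)]

for node_id, shares in enumerate(per_node):
    print(f"node {node_id}: {shares}")
```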
