IoT security authentication

A study of authentication in the IoT and a comparative review of authentication techniques
Dr. Mohit Bansal
Published: 26-10-2017
Authentication in IoT

    The raw math of shifting from IPv4 to IPv6 enables the universe of connectivity to go from the size of a golf ball to the size of the sun. Today, we have not proved our ability to manage security for a golf ball. What are we going to do when we inhabit the “sun”—when everything around us is a connection point, and thus an entry point for an attacker?

    Emily Frye, Principal Engineer with MITRE Corporation
    Opening statement for the Cyber Security Panel at the 2015 IEEE International Symposium on Technologies for Homeland Security

Authentication defends the universe of connectivity against attackers by verifying identities at entry points. This identification applies both to the entities that manipulate data and to the information that the data carry. Communicating entities should identify one another, and information exchanged during communication should be validated with regard to its origin, time, content, and so on. Authentication is therefore usually divided into two major classes: entity authentication and message authentication.

This chapter, after explaining the fundamentals of authentication, considers entity authentication and message authentication pertinent to the Internet of Things (IoT) that connects everything around us. For each class, a case study of IoT applications in industries such as transportation or healthcare is examined. The last section considers key management in a body area network (BAN), an IoT application for healthcare. Authentication is often supported by encryption techniques, which in turn require key management. Symmetric-key cryptography has to establish a shared secret key between the two parties who wish to communicate confidentially, and knowledge of the shared secret can serve to authenticate the participants' identities.
Asymmetric-key cryptography involves a trusted third party that binds the identity of an entity to its public key so that other entities can communicate with it confidentially; the binding serves as a certificate to authenticate the entity. These traditional key management methods are not suitable for a BAN because of its limited computational resources and power budget. The demand for a high level of security in healthcare, where human lives are at stake, further challenges the design of the BAN. However, the human body offers unique opportunities for a new authentication methodology based on biometrics.

13.1 Fundamentals of Authentication

Authentication refers to the process of guaranteeing that an entity is who it claims to be or that information has not been changed by an unauthorized party. Authentication is classified by the security objective specific to a service: message authentication, entity authentication, key authentication, nonrepudiation, and access control. Message authentication assures the integrity and origin of information. As its two components, data integrity protects information from unauthorized alteration, while data origin authentication assures the identity of the data originator; data origin authentication implies data integrity, because the originator is no longer the source of a modified message. Entity authentication, also named endpoint authentication or identification, assures both the identity and the presence of the claimant at the time of the process. The timely verification of identity is either mutual, when both parties (sender and receiver, for example) are confirmed to each other, or unilateral, when only one party is assured of the other's identity. Key authentication assures the linkage of an entity and its key(s), which extends to broader aspects of key management: key establishment/agreement, key distribution, key usage control, and the key life cycle.
Key authentication plays a vital role in the Internet age, when users cannot meet face-to-face to exchange keys or know each other personally to verify them. Trusted third parties step in as certification authorities (CAs) responsible for vouching for a key's authenticity: binding keys to distinct individuals, maintaining certificate usage, and revoking certificates [1]. Nonrepudiation prevents an entity from denying its previous actions; often, a trusted third party is needed to resolve a dispute arising from an entity denying that it committed, or omitted, a certain action. Access control, or authorization, following successful entity authentication, places selective restrictions on an entity's use of data and resources.

To clear up the confusion among these terms, this book classifies authentication by timeliness into two categories, from which the others can be derived:

1. Entity authentication in real time: Alice and Bob, both active in the communication, assure each other's identity with no time delay.

2. Message authentication in an elastic time frame: Alice and Bob exchange messages with assurance of the integrity and the origin of the messages, even at a later time.

Traditionally (before the mid-1970s), authentication was intrinsically connected with secrecy. For example, password authentication during ancient wartime relied on a shared secret, such as a watchword between parties; demonstrating knowledge of this secret by revealing the word corroborated the entity's identity and granted the entity passage into the territory. Fixed-password schemes, involving time-invariant passwords, are considered weak authentication, subject to attack by eavesdropping and exhaustive search. Various techniques are applied to fixed-password schemes to strengthen secrecy.
Instead of being stored as clear text, the password is encrypted (or hashed) to make it unintelligible, or salted, that is, augmented with a random string, to increase the complexity of a dictionary attack. However, authentication does not require secrecy, as the discovery of hash functions and digital signatures showed. A hash function is a one-way function that maps a binary string of arbitrary length to a binary string of fixed length, called a hash value, which serves as a compact representative of the input string. Two features make hash functions useful for authentication:

1. It is computationally infeasible to find two distinct inputs with the same hash value, that is, two colliding inputs x and y such that h(x) = h(y).

2. It is computationally infeasible, given a specific hash value v, to find an input x with that hash value, that is, given v, to find a preimage x such that h(x) = v.

Symmetric-key encryption is one-key cryptography with a shared secret key; asymmetric-key encryption is two-key cryptography with a pair consisting of one public key and one private key; a hash function is unkeyed cryptography, with no key.

Hash functions may be used for data integrity, authenticating messages without keeping the messages secret. A typical data-integrity process with a hash function works as follows:

 Alice computes the hash value corresponding to a message and then sends the message to Bob, along with its hash value.

 Bob computes the hash value corresponding to the received message and compares his computed hash value with the received hash value. The comparison verifies whether the message has been altered.

If Eva altered the message en route, Bob would be able to detect the modification, thus preserving data integrity without the need to keep the message secret from Eva. Note that the infeasibility of finding two inputs with the same hash value satisfies the security requirement for data integrity.
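The integrity check just described can be sketched in a few lines. This is a minimal illustration using Python's standard hashlib; the function names and message are illustrative assumptions, and in practice the hash value would need to travel over a channel the attacker cannot modify:

```python
import hashlib

def send(message: bytes):
    # Alice: compute the hash value and transmit it alongside the message.
    return message, hashlib.sha256(message).hexdigest()

def verify(message: bytes, received_hash: str) -> bool:
    # Bob: recompute the hash of what arrived and compare it with the
    # received hash value; a mismatch reveals modification en route.
    return hashlib.sha256(message).hexdigest() == received_hash

msg, tag = send(b"brake status: OK")
assert verify(msg, tag)                        # unmodified message passes
assert not verify(b"brake status: FAIL", tag)  # altered message is detected
```

The collision resistance of the hash function is what makes the check meaningful: Eva cannot feasibly craft a different message carrying the same tag.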
Were collisions feasible to find, Eva could substitute another message with the same hash value and escape detection. Keyed hash functions, which compute hash values under a shared secret key, are named message authentication code (MAC) algorithms; their specific purpose is message authentication (data origin authentication as well as data integrity).

Hash functions may also be used for digital signatures. A digital signature binds an entity's identity to information with a tag called the signature. A typical process is as follows:

 Alice signs a long message by computing its hash value and then sends the message to Bob along with the hash value, usually encrypted, as her signature.

 Bob receives the message, computes its hash value, and verifies that the received signature matches the hash value.

Note, again, that the noncollision property of hash functions prevents Alice from later claiming to have signed another message, because the signature on one message would not be the same as that on another. In addition, it is not necessary to keep the message secret from Eva for the purpose of a digital signature, since the hash value, not the message itself, is encrypted to strengthen nonrepudiation.

The third cryptographic use of hash functions is identification, or entity authentication. Using a one-way (nonreversible) function of a shared key and a challenge, a claimant proves its knowledge of the shared key by providing the verifier with the hash value rather than the key; the verifier checks whether the delivered hash value matches its own computed hash value to confirm the claimant's identity. The challenge prevents replay attacks. Though the terms identification and entity authentication are often used synonymously, they can be distinguished: identification refers only to a claimed (stated) identity, whereas entity authentication (or identity verification) corroborates that identity.
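The challenge–response identification just described can be sketched as follows. This is a minimal illustration using Python's standard hmac and secrets modules; the shared key, the nonce length, and the function names are assumptions for demonstration, not a standardized protocol:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"key known only to claimant and verifier"  # assumed pre-established

def challenge() -> bytes:
    # Verifier: issue a fresh random nonce so an old response cannot be replayed.
    return secrets.token_bytes(16)

def respond(key: bytes, nonce: bytes) -> bytes:
    # Claimant: prove knowledge of the key by hashing it with the challenge,
    # revealing the hash value rather than the key itself.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def check(key: bytes, nonce: bytes, response: bytes) -> bool:
    # Verifier: recompute the expected response and compare in constant time.
    return hmac.compare_digest(respond(key, nonce), response)

nonce = challenge()
assert check(SHARED_KEY, nonce, respond(SHARED_KEY, nonce))        # genuine claimant
assert not check(SHARED_KEY, nonce, respond(b"wrong key", nonce))  # impostor fails
```

Because the response depends on a time-variant nonce, an eavesdropper who captures one exchange cannot reuse it against a later challenge.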
Likewise, a digital signature is closely related to entity authentication, but it involves a variable message to be signed, for nonrepudiation after the fact, while entity authentication uses a fixed message, such as a claimed identity, to grant or deny instant access, with no lifetime.

Parties in entity authentication:

 Claimant (prover): An entity that declares its identity in a message, often in response to an earlier challenge message (as in challenge–response protocols), to demonstrate that it is the genuine entity.

 Verifier: An entity that corroborates that the identity of the claimant is indeed as declared, by checking the correctness of the message, thereby preventing impersonation.

 Trusted third party: An entity that mediates between two parties to offer an identity-verification service as a trusted authority.

Objectives of entity authentication:

 Conclusiveness: The outcome of entity authentication is either completion, with acceptance of the claimant's identity as authentic, or termination, with rejection.

 Nontransferability: Identification is not transferable; a verifier cannot reuse an identification exchange with a claimant to impersonate that claimant to a third party.

 Nonimpersonation: There is a negligible probability that any entity other than the claimant can play the claimant's role and cause a verifier to complete with acceptance of the claimant's identity; that is, no entity can impersonate a claimant. Nonimpersonation remains true even if an adversary has participated in previous authentications with the claimant, the verifier, or both, in multiple instances.

Factors of entity authentication:

 Something known: The claimant demonstrates knowledge of a secret, such as a password, personal identification number (PIN), shared secret key, or private key.

 Something possessed: The claimant typically presents a physical token functioning as a passport.
Examples are magnetic-stripe cards, smart/IC cards, and smartphones that provide time-variant passwords.

 Something inherent: The claimant provides biometrics inherent in human physical characteristics and involuntary actions. Examples are fingerprints, retinal patterns, walking gait, and dynamic keyboarding characteristics. These techniques have now been extended beyond the authentication of human individuals to device fingerprints.

Levels of entity authentication:

 Weak authentication: Entity authentication schemes are considered weak if previously unknown parties verify their identities without involving trusted third parties. Single-factor authentication need not be weak: a one-time password, for example, is viewed as unbreakable against eavesdropping and later impersonation, because each password, as the "something known" factor, is used only once.

 Strong authentication: Entity authentication techniques using at least two factors are called strong authentication. Challenge–response protocols are strong authentication: a claimant proves its identity to a verifier by demonstrating knowledge of a secret known to be associated with the claimant, without revealing the secret itself to the verifier during protocol execution. Since the claimant's response to a time-variant challenge depends on both the claimant's secret (such as its private key) and the challenge (such as a random nonrepeating number called a nonce), two factors are used in the protocols.

 Zero-knowledge (ZK) authentication: Authentication protocols based on zero knowledge reveal no partial information during execution. Simple password schemes reveal the whole secret: after a claimant gives a verifier the password, the verifier can impersonate the claimant by replaying it.
Challenge–response protocols improve on this by demonstrating knowledge of the secret in a time-variant manner without giving the secret away, so that the exchanged information is not directly reusable by an adversarial verifier. However, some partial information about the claimant's secret is still revealed, making challenge–response protocols susceptible to chosen-text attacks. ZK protocols allow a claimant to demonstrate knowledge of a secret while revealing no information of any use to a verifier attempting impersonation; the claimant proves only the truth of an assertion, similar to an answer obtained from a trusted oracle. However, the ZK property does not by itself guarantee that a protocol is secure unless the underlying attack problem is computationally hard.

Properties of entity authentication that are of interest to users:

 Reciprocity of identification: Both parties corroborate each other (mutual authentication), or one party corroborates the other (unilateral authentication). Some unilateral authentications, such as fixed-password schemes, are susceptible to an adversary posing as a verifier to capture a claimant's password for replay attacks.

 Computational efficiency: The computational complexity of an authentication protocol.

 Communicational efficiency: The communication overhead of a protocol.

 Third party: Entity authentication techniques may involve a third party between two parties wishing to communicate in a trusted manner.

 Timeliness of involvement: The third party may stay online to provide authentication services in real time, as in the Kerberos protocol, which distributes common symmetric keys to communicating parties for entity authentication; a CA, by contrast, often works offline to issue or revoke public-key certificates.

 Nature of trust: The third party could be an untrusted directory service for distributing public-key certificates.
The nature of trust required in a third party includes trusting it to deliver correct outcomes.

 Nature of security guarantees: Examples are provable security and ZK properties.

 Storage of secrets: Where and how critical keying material is stored, for example, on local disks, smart cards, or clouds, in software or hardware.

13.2 Entity Authentication: Node Eviction in VANET

Vehicular networking features high-speed mobility, short-lived connectivity, and infrastructureless networking, forming vehicular ad hoc networks (VANETs). Figure 13.1 depicts a typical network architecture of VANET, where roadside units (RSUs) operate in two modes: infrastructure and ad hoc. RSUs operating in infrastructure mode connect to network infrastructure such as the Internet or cellular networks for services provided by external components, such as travel advertisement and electronic toll collection. An RSU communicates sporadically with vehicles' onboard units (OBUs) in ad hoc mode, and OBUs also communicate among themselves in ad hoc mode. An OBU contains an OBD-II interface, a set of sensors that measure the vehicle's own status such as its brakes; GPS to identify its location; radar to detect other vehicles nearby; and a transceiver (TRX) to communicate with RSUs and other vehicles. These components feed information to the codriver, a special-purpose computer that monitors road safety and processes travel services. Thus, VANET is an exemplary IoT, with cars among the largest things to be connected on the IoT.

Beyond faulty nodes, such as malfunctioning OBUs, hindering VANET performance with potentially fatal consequences in safety applications, malicious nodes intentionally inject faulty messages into the VANET with the potential for massive destruction [2]. It is of paramount importance to remove errant nodes from the VANET immediately. Node-eviction schemes accompany authentication mechanisms in network security.
Traditionally, a centralized CA, such as the Motor Vehicle Registry, revokes an errant node's certificate.

Figure 13.1: Network architecture of VANET.

However, the nature of VANET renders CA-based approaches ineffective. Current node-eviction schemes in VANET allow nodes to make decisions and take action against errant nodes in a distributed, local fashion. Local node-eviction schemes can be classified into five categories.

1. Reputation: In the absence of a strong authentication infrastructure in VANET, simple node misbehavior can severely degrade the network with catastrophic consequences. For example, a selfish node may flood fake congestion messages upstream, subverting traffic to clear its own way but possibly leading to a chain of accidents. As a security mechanism, an individual node forms and updates a reputation metric for other nodes with which it has interacted, through its own direct observation and information provided by other nodes. Individuals disengage from nodes with which they have had bad experiences; eventually, nodes with a bad reputation are excluded from the VANET. CORE is a typical collaborative reputation mechanism that enforces proper node behavior in a mobile ad hoc network [3]. Reputation-based approaches are resilient against false detection but respond to incidents slowly.

2. Vote: Raya et al. proposed a local eviction of attackers by voting evaluators (LEAVE) protocol [4]. The CA collects accusations from different nodes that have witnessed a node's misbehavior and, on reaching a threshold, revokes the accused node. LEAVE augments this infrastructure-based revocation protocol with a misbehavior detection system, enabling individual nodes to safeguard themselves.
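The threshold-revocation step of a vote scheme such as LEAVE can be sketched as follows. This is an illustrative fragment, not the LEAVE protocol itself; the accusation bookkeeping, class name, and the 0.5 threshold (which matches the simulation setting later in this chapter) are assumptions:

```python
from collections import defaultdict

THRESHOLD = 0.5  # fraction of the population whose accusations trigger revocation

class VoteRevoker:
    def __init__(self, num_nodes: int):
        self.num_nodes = num_nodes
        self.accusers = defaultdict(set)  # accused node -> set of accusing nodes
        self.revoked = set()

    def accuse(self, accuser: int, accused: int):
        if accuser in self.revoked:
            return  # revoked nodes lose their vote
        self.accusers[accused].add(accuser)
        # Revoke once more than THRESHOLD of all nodes have accused this node.
        if len(self.accusers[accused]) > THRESHOLD * self.num_nodes:
            self.revoked.add(accused)

r = VoteRevoker(num_nodes=6)
for accuser in (0, 1, 2, 3):  # four of six nodes accuse node 5
    r.accuse(accuser, accused=5)
assert 5 in r.revoked
```

The sketch also shows the scheme's weakness discussed next: if deceptive nodes outnumber honest ones, they can push an honest node past the threshold just as easily.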
Vote schemes equip individuals with rapid reaction and self-protection. However, voting becomes unjust when deceptive nodes outnumber honest ones.

3. Suicide: To ensure the accountability of accusers, the suicide class allows a single node to unilaterally revoke another node at the cost of being revoked itself, known as karmic suicide [5]. The motivation comes from nature: a bee stings, losing its life, to respond to a perceived threat to its hive. The karmic-suicide revocation scheme offers an incentive to nodes that have committed suicide: a periodically available trust authority (TA) rewards a node for a justified suicide by reinstating it into the VANET. Suicide schemes inherit the speedy revocation process of vote schemes while increasing accuracy.

4. Abstinence: At the opposite extreme from reputation schemes, the abstinence class keeps its ratings of others to itself. On experiencing a bad node's misbehavior, a node takes the passive role of staying away from the bad node but provides no report, expecting other nodes to eventually remove the bad node from the network. Each node can take one of three actions in a revocation process: abstain, vote, or commit suicide. Optimal revocations in ephemeral networks (OREN) is a game-theoretic framework for local revocation, based on reputation, which dynamically adapts its cost parameters to guarantee a successful revocation in the most socially efficient manner [6].

5. Police: The police class is effective for revocation in transportation but largely unexplored in VANET. A special vehicle, such as a police car, patrols the road network and revokes misbehaving nodes immediately on detection [7]. This class is accurate, since the evidence is first-hand, but its speed depends on the chance of a node being caught, even though the eviction itself is instantaneous.

Various factors affect the performance of node-eviction schemes.
The topology of roads, the spread of RSUs, vehicle speeds, driver behavior, and the number of malicious nodes are just some examples. Using an agent-based approach, we simulate the node-eviction schemes described above. Agent-based simulation was chosen for its flexibility and emergent behavior: it can model the behaviors and goals of individual nodes, such as mobility and scheme configuration. This is useful for modeling very complex systems, such as intelligent transportation systems that involve driver behaviors, vehicle speeds, and individual goals. We used the recursive porous agent simulation toolkit (Repast) as our agent simulation toolkit because of its platform independence, seamless GIS integration, large learning-resource base, user friendliness, and programmer control [8].

The simulation scenario consists of a circular road on the grid, where vehicles cycle around the road at different speeds and communicate with one another, or with the RSU, when in close proximity. The RSU relays information to the CA. The behavior of the system components depends on the scheme used. The node-eviction scheme and the frequency of contact were implicit in our model. The frequency of contact refers to how often nodes come into contact with each other and exchange messages; it has a significant impact on the performance of a scheme and is influenced by the variance of node speeds and their initial locations. In our simulation, we attempt to answer whether a scheme will eventually separate the malicious nodes from the honest nodes into the two network classes, and how long this takes [9]. Any node-eviction scheme should attempt to optimize the average time, risk, and utility measures under dynamic environmental conditions.
In our simulation, we study how the evaluation parameters change with the percentage of malicious nodes present in the network. The network contained 60 nodes in total, one of which was a police node. We model the node-eviction process as a set of states and transitions. Such a process eventually separates all nodes into two subnets, Subnet I and Subnet II. A node, good or bad, initially joins either subnet by convenience. A state transition occurs when a node moves from Subnet I to Subnet II, or vice versa. As birds of a feather flock together, Subnet I and Subnet II each finally converge to a single kind of node, good or bad. The system is modeled as a network of who wants to receive messages from whom, controlled by certificates: each node maintains a list of other nodes' valid certificates (LVC).

As predicted, the vote class performed best in terms of average vulnerability time, because every incident triggers segregation, and only half of the population is required to vote a node out with our threshold set at 0.5. The police class took second place, since it segregates a bad node once the police catch it sending a rogue message; its time increases with the percentage of bad nodes, because it takes longer for the police to reach each offender. The abstinence class performs worst, since a bad node is moved to Subnet II only when all nodes have removed it from their LVCs. When the percentage of bad nodes increases, the time dips slightly, since the probability of encountering a bad node is higher. Figure 13.2 depicts the time simulation results.

Figures 13.3 and 13.4 summarize the accuracy simulation results. The police class showed the best accuracy, with the highest unity and the lowest risk. Police and abstinence displayed the same unity value of 1, insensitive to the percentage of bad nodes, because their actions depend on first-hand information.
No false accusations take place; hence, good nodes are not mixed with bad ones. The unity value of the vote class diminishes as the proportion of bad nodes reaches 0.5, its threshold setting, since false accusations by bad nodes move good nodes to Subnet II.

Figure 13.2: Average vulnerability time.

Figure 13.3: Average unity.

Figure 13.4: Average risk.

The police class poses the lowest risk among the three, because every detection triggers a bad node's move from Subnet I to Subnet II. In the end, good and bad nodes are largely segregated, with almost no risk. However, as the percentage of bad nodes increases, it becomes difficult for the single police node to catch all the bad nodes in time, as multiple bad nodes pop up simultaneously at different locations; some bad nodes may never be caught, which shows up as a rise in risk. The vote class also poses a low risk when the percentage of bad nodes is low but, as the proportion increases beyond 0.5, its threshold setting, the risk rises suddenly for two reasons: there are fewer good nodes to report, and more false accusations by bad nodes leave still fewer good nodes. As the simulation reaches equilibrium, almost all nodes, good and bad, end up in Subnet II, mirroring the initial state. The abstinence class has the highest risk, since a bad node is moved out of Subnet I only when every other node has abstained from it. Risk rises steadily as the percentage of bad nodes increases.
At some points the risk fluctuates, as a good node is removed from Subnet I. Notice that, well after the proportion of bad nodes passes 0.5, the risk value of the abstinence class becomes lower than that of the vote class, because bad nodes outnumbering good ones distort the truth.

13.3 Message Authentication: Content Delivery in VANET

The core of VANET applications relies on providing drivers with timely, accurate information, namely content delivery [10]. However, VANET content delivery poses serious security threats to confidentiality, integrity, and authentication, owing to the distributed, open, and mobile nature of VANET [11]. Various security mechanisms have been proposed; nevertheless, without common metrics to measure their effectiveness, consumer confidence cannot be assured, especially regarding critical road-safety concerns [12]. Unfortunately, security measurement is difficult [13] and differs from other kinds of measurement, such as level of service in transportation [14] or quality of service in wireless multimedia [15].

We propose a security metric to measure the integrity level of security schemes for VANET content delivery, namely, an asymmetric profit-loss Markov (APLM) model [16]. With a black-box approach, the model records incidents of detecting data corruption as profits and incidents of accepting corrupted data as losses. We use a Markov chain to record how the system under assessment adjusts its behavior in reaction to profit and loss; since there are more loss states than profit states, the system is asymmetric. We then present how APLM directs the optimization of integrity-scheme design for VANET content delivery, measuring results on a normal VANET content delivery deploying no integrity scheme and on four integrity schemes: reputation, voting, voting on reputation (VOR), and random.
When a VANET passes an RSU, the OBUs on the vehicles deliver to the RSU the traffic status of the upstream road segment. The traffic status can be expressed as traffic density, that is, the number of equivalent passenger cars per mile, with a timestamp attached. Whenever the vehicles are in the vicinity of the RSU, their OBUs respond to the RSU's repeated requests for the traffic status of the upstream road segment. To focus the scope of this chapter, we do not consider other content deliveries, such as RSUs exchanging messages for global information, OBUs communicating with each other to avoid collisions, or RSUs advising OBUs on alternative routes.

As shown in Figure 13.5, an RSU joins a VANET moving in its vicinity. The RSU then establishes concurrent transmission control protocol (TCP) connections with the OBUs on selected vehicles in the VANET. Each OBU contains a subset of the fragments of the content. The RSU repeatedly requests fragments from each of the OBUs over the TCP connections until it successfully assembles all the fragments into the full content; during the process, the RSU decides which fragment to obtain next from which OBU.

Our VANET content-delivery architecture possesses compelling scalability, extensibility, and flexibility. Similar to file distribution in peer-to-peer (P2P) architectures, our VANET content delivery self-scales, with a bounded delivery time for any number of vehicles in the VANET. Its functionality is extensible to other content deliveries among RSUs and OBUs in both directions, and the architecture supports flexible applications from collision avoidance to travel efficiency. However, the architecture faces security challenges due to its distributed, open, and mobile nature, as discussed previously.
Figure 13.5: Architecture of VANET content delivery.

We propose a new integrity scheme named voting on reputation for VANET (VOR4VANET). The scheme has two stages: local reputation calculation, in which an RSU assigns a rating to each OBU based on its own evaluation of past transaction success with that OBU, followed by voting weighted by reputation, in which a vote weighted by reputations among OBUs, instead of a majority vote, settles content discrepancies.

Local reputation is calculated as an exponentially weighted moving average over past ratings, updated on completion of downloading all the data fragments needed to assemble the content:

    R_t = (1 − a) R_{t−1} + a M

where R_0 = 0, and M = 1 if the OBU delivers a good fragment or −1 if it delivers a bad fragment. In this trial, a is in [0, 1], with a recommended value of 0.125.

Voting weighted by reputation determines the correct version of a data fragment when multiple copies from several OBUs carry different values. We adjust the mode calculation of the list of fragment values with the reputations of the corresponding OBUs:

    F = mode over h of { R^h incidences of F^h }

where F^h is the fragment value delivered by OBU h, counted R^h times, and the reputations R^h are scaled up to nonnegative integers. For example, suppose an RSU receives duplicates of a data fragment from four OBUs, and only one of the four delivers a "good" fragment while the other three send "bad" fragments; a majority vote would lead the RSU to accept the "bad" fragment. Incorporating the reputations listed in Table 13.1, the list equates to 3 Gs and 2 Bs, resulting in a mode of G; the RSU therefore accepts the "good" fragment.

When a fresh VANET arrives in the vicinity of an RSU, the RSU checks its reputation base for all the OBUs on the vehicles in the VANET and chooses those OBUs of high reputation that together hold the fragments to cover the entire content.
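The two stages of VOR4VANET can be sketched together as follows. This is a minimal illustration, not the deployed scheme; the data layout and function names are assumptions, with a = 0.125 as recommended above:

```python
from collections import Counter

A = 0.125  # EWMA weight for the reputation update

def update_reputation(prev: float, good: bool) -> float:
    # R_t = (1 - a) * R_{t-1} + a * M, with M = +1 (good) or -1 (bad).
    return (1 - A) * prev + A * (1 if good else -1)

def weighted_vote(fragments: dict, reputations: dict) -> str:
    # Count each OBU's fragment value R^h times (reputations already scaled
    # to nonnegative integers), then take the mode of the weighted list.
    tally = Counter()
    for obu, value in fragments.items():
        tally[value] += reputations[obu]
    return tally.most_common(1)[0][0]

# The Table 13.1 example: only OBU 2 sends the good fragment "G".
fragments   = {1: "B", 2: "G", 3: "B", 4: "B"}
reputations = {1: 1, 2: 3, 3: 0, 4: 1}
assert weighted_vote(fragments, reputations) == "G"  # 3 Gs vs. 2 Bs
```

A plain majority vote over the same fragments would return "B"; the reputation weights are what let the single trustworthy OBU prevail.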
The RSU then establishes concurrent TCP connections with the chosen OBUs and requests each for their fragments. As mentioned before, an OBU may not have the complete set of fragments to cover the entire content. With proper selection of OBUs, the RSU would receive all the fragments needed to assemble the content, most of which would be duplicates.

Table 13.1 Majority vote vs. voting weighted by reputation

  OBU h     1    2    3    4    Result
  F^h       B    G    B    B    B (majority vote)
  R^h       1    3    0    1    G (voting weighted by reputation)

When a discrepancy occurs in the value of a particular fragment due to corruption in some OBUs, the RSU invokes the voting scheme to settle the matter. The verdict will be reached after the RSU receives all the fragments and assembles them into the full content. The RSU then updates its reputation base. If the content fails the integrity check, the RSU repeats its selection process and requests fragments again until either the content delivery succeeds or the VANET passes out of its vicinity.

Our APLM model of content integrity metrics employs content hosts, such as OBUs in a VANET, and content retrievers, such as RSUs. The APLM model is based on the idea that an effective integrity scheme would enable content retrievers to avoid "bad" content hosts and request "good" content hosts for the fragments needed to assemble a particular content set. Each state represents the distinct number of content retrievers obtaining at least one corrupted data fragment without detection. Therefore, the state space of the Markov chain consists of (n + 1) states for n content retrievers: a value of 0 denotes that none of the content retrievers has accepted "bad" fragments, 1 that one of them possesses corrupted fragments, and so on, up to n, where all of them possess "bad" fragments without these being detected and disregarded. State 0 indicates profit while all the other states indicate loss; this represents asymmetric profit-loss, since there are more loss states than profit states.
The heuristic matches the Markov property that the next state depends only on the current state. Through black-box observation, the probabilities of state transitions can be obtained. In P, the Markov matrix of Equation 13.1, p_{i,j} denotes the probability of transitioning from state i to state j, where the probabilities of transitioning from state i to each of all the states (itself inclusive) sum to 1, as indicated by Constraint 13.2.

      | p_{0,0}  ...  p_{0,n} |
  P = |   ...    ...    ...   |      (13.1)
      | p_{n,0}  ...  p_{n,n} |

  Σ_j p_{i,j} = 1      (13.2)

Using the vector π to represent the probabilities of all the steady states, π_i denotes the probability of the network being in state i. Assuming an ergodic property for this Markov process, Equations 13.3 and 13.4 hold true. We can derive the steady-state probabilities, π_i, by solving the linear system of Equation 13.4 together with any n equations taken from Equation 13.3.

  πP = π      (13.3)

  Σ_i π_i = 1      (13.4)

Having found the steady-state probability vector π, we can then calculate the integrity score based on profit and loss as in Equation 13.5. The range of f(π) is [−1, 1], where −1 represents the worst, 1 the best, and 0 indicates a system in a state of equilibrium between good and bad. The first term computes the profit obtained by remaining in state 0, π_0, normalized to 1 by its coefficient g(0). The second term sums the losses at the other states, π_i, normalized to −1 by the coefficients g(i). Equation 13.5 reflects the asymmetric feature, with only one state carrying profit while the remaining n states cause loss.

  f(π) = g(0)·π_0 − Σ_{i=1}^{n} g(i)·π_i      (13.5)

APLM features a black-box approach to measure an integrity scheme without the need to examine its implementation in detail; it thereby offers feasibility to the measurement process and autonomy without the expertise often associated with white-box methods.
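Under the ergodicity assumption, the steady-state vector and the integrity score can be computed numerically as below. The transition matrix and the weights g(i) in this sketch are hypothetical examples for a three-state chain, not values from the chapter.

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi subject to sum(pi) = 1 for an ergodic Markov chain:
    stack the n equations (P^T - I) pi = 0 with the normalization row."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def integrity_score(pi, g):
    """f(pi) = g(0)*pi_0 - sum_{i>=1} g(i)*pi_i, asymmetric profit/loss."""
    return g[0] * pi[0] - sum(g[i] * pi[i] for i in range(1, len(pi)))

# Hypothetical 3-state chain: state 0 = no retriever holds a bad fragment.
P = np.array([[0.8, 0.15, 0.05],
              [0.4, 0.5,  0.1],
              [0.2, 0.3,  0.5]])
pi = steady_state(P)
g = [1.0, 0.5, 0.5]  # example coefficients, chosen here for illustration
score = integrity_score(pi, g)
```

A higher share of steady-state probability in state 0 (profit) pushes the score toward 1; mass in the loss states pulls it toward −1.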
By utilizing historical statistics recorded as profit and loss, APLM measures the integrity levels of five scenarios: normal delivery without deploying any integrity scheme, the two schemes adapted from P2P file distribution, our VOR4VANET, and a random scheme. In the APLM model, the content hosts correspond to the OBUs in the VANET and the content retrievers to the RSUs. We also demonstrate how APLM directs the design of our VOR4VANET.

1. Normal VANET content delivery: Under the VANET content delivery architecture illustrated in Figure 13.5, an RSU obtains data fragments from whichever OBUs possess them. Once the RSU receives all the content fragments, it assembles them and checks content integrity. If there is corruption in a data fragment, which the RSU cannot detect during fragment transmission but can discover only after download completion, the RSU repeats its requests to all OBUs in its vicinity for the missing fragments. Normally, an RSU tends to obtain data fragments from those OBUs that respond faster.

2. Reputation scheme on individual OBUs: With the reputation scheme, an RSU maintains a local reputation base of all OBUs in a VANET that is passing by. The RSU chooses those OBUs at the top of the reputation list to request the data fragments it needs. Thereby, the level of content integrity increases at the cost of delaying delivery. The idea is borrowed from P2P file distribution, where a reputation base is usually maintained by a trusted central server. Our reputation scheme allows individual RSUs to maintain their own reputation bases locally, and doing so in such a distributed fashion lessens the bottleneck effect of centralized schemes. There are various ways for an RSU to rate each OBU based on the OBU's past performance in delivering "good" or "bad" data fragments.
We choose the dynamic reputation formula given earlier in this section, which takes the exponentially weighted moving average over past ratings so that more recent measurements better reflect the current status of the system.

3. Voting scheme on data fragments: The voting scheme targets the problem that remains in the reputation scheme, where corruptions are detected only after completion of downloading all fragments. This severely reduces the efficiency of content delivery. Adapted from P2P file distribution, an RSU requests multiple copies of a data fragment from several highly reputable OBUs over concurrent TCP connections. When there is a discrepancy (n.b., not necessarily corruption) among the copies, a majority vote takes place to determine which fragment to accept. Obviously, the voting scheme requires more processing overhead. Intuitively, the voting scheme should outperform the reputation scheme in assuring content integrity and delivery efficiency. However, the results from our APLM model surprised us, as indicated by the next scheme, VOR4VANET: a majority vote under bad influence yields a wrong result. This study demonstrates the effectiveness of our APLM model in directing the optimization of security scheme design.

4. VOR4VANET: Voting on reputation for VANET (VOR4VANET) contains two stages: local reputation calculation and voting weighted by reputation. The first stage is the same as the reputation scheme on individual OBUs. The second stage differs from the voting scheme on data fragments presented above. Instead of taking a majority vote, VOR4VANET gives greater weight to a more reputable OBU in the voting. In those cases when there are more "bad" OBUs than "good" ones, a majority vote would yield the undesirable result of selecting a corrupted data fragment. Such a situation may be corrected by incorporating reputation into the procedure, giving more reputable OBUs more weight in the voting.
The experiments have confirmed our hypothesis. The APLM model directly steered our design away from the flawed majority voting of the scheme on data fragments.

5. Random OBU choice: In computer science and engineering, when optimization relies on heuristics, randomness often works wonders, as in cache replacement algorithms, for example. We also propose a scheme that chooses OBUs randomly. Out of all the OBUs in a VANET that is passing by, an RSU chooses OBUs at random to request data fragments. Such a scheme involves barely any overhead but improves on normal VANET content delivery.

Figure 13.6 shows a result of VANET simulation under a normal setting.

Figure 13.6: VANET simulation.

13.4 Key Management: Physiological Key Agreement in WBAN

Another application domain of the IoT is medical cyberphysical systems (MCPSs), which monitor and control patients' physiological dynamics with embedded/distributed computing processes and a wireless/wired communication network. MCPSs greatly impact society with high-quality medical services and low-cost ubiquitous healthcare. The major component that integrates the physical world with cyberspace is the wireless body area network (WBAN) of medical sensors and actuators worn by or implanted into a patient. The life-critical nature of MCPSs mandates safe and effective system design. MCPSs must operate safely under malicious attacks. Authentication ensures that a medical device is what it claims to be and does what it claims to do; it is the first line of MCPS defense. Traditional authentication mechanisms, reliant on cryptography, are not applicable to MCPSs due to constraints on computing, communication, and energy resources. Recent innovations to secure mobile wireless sensor networks, with multisensor fusion to save power consumption, are not adequate.
Despite these challenges, MCPSs present great opportunities, given the unique physical features of WBANs, for noncryptographic authentication and human-aided security. This chapter proposes an authentication framework for MCPSs. By studying medical processes and investigating healthcare adversaries, the novel design crosses the boundary between the physical world and cyberspace. With uneven resource allocation, resource-scarce WBANs use no encryption for authentication. Evaluation of this authentication protocol shows promising aspects and ease of adaptability.

An MCPS represents a physiologically closed-loop system, where an automatic controller continuously monitors the patient's vital signs with sensors and administers medication as needed with the aid of an actuator such as an infusion pump. Closed-loop control has been applied in the medical device industry, but mostly limited to stand-alone implants. For example, pacemakers deliver electrical impulses by battery-powered electrodes, often combined with defibrillators, to regulate the heartbeat of cardiac patients without human intervention. Some clinical scenarios not based on thresholds, however, need coordination of distributed medical devices. Due to patients' different reactions to medications, for instance, seizure detection is deemed ineffective with the current method of threshold-based brain oxygen monitoring. Therefore, physiologically closed-loop control relies on individualized patient modeling and also requires a fail-safe caregiver interface. Figure 13.7 depicts the control loop of a typical MCPS. Boxes represent medical devices, and ovals denote MCPS users.

Figure 13.7: MCPS control loop.
Solid lines indicate the workflow, while dashed lines exemplify the maintenance procedure.

A use case is patient-controlled analgesia (PCA), developed by a team at the University of Pennsylvania jointly with U.S. Food and Drug Administration (FDA) researchers. PCA infusion pumps deliver opioid drugs for postsurgical pain management. A patient can adjust the dosage rather than follow a schedule prescribed by a caregiver, because people react differently to the medications. However, an overdose causes respiratory failure, leading to death. A PCA closed-loop system solves this safety issue. A pulse oximeter (sensor) continuously monitors two respiratory-related vital signs, heart rate (HR) and blood oxygen saturation (SpO2), and transmits the physiological signals to a controller. The controller, on detecting respiratory depression, commands the infusion pump (actuator) to stop dispensing the pain medication to the patient. The controller also sends an alarm to the caregivers, who have the ability to override the PCA if an adverse event occurs. The maintenance procedure includes sensors/actuators updating their operational status to the controller and the controller configuring the sensors/actuators [18].

Authenticating medical devices in the physical world can avoid resource-intensive cryptography by taking advantage of human biometrics [17]. We adopt a popular noncryptographic authentication scheme, called physiological signal-based key agreement (PSKA), by Venkatasubramanian et al., to extend our authentication framework to the physical world. The framework is suitable for general WBANs, with any authentication scheme based on biometrics such as electrocardiograms (ECGs). PSKA utilizes photoplethysmogram (PPG) signals to authenticate the sensors worn on a human body utilizing their shared physiological features.
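One iteration of the PCA safety logic described above can be sketched as follows. The thresholds and the `pump`/`alarm` interfaces are hypothetical, chosen only to illustrate the controller's decision; real clinical thresholds and device APIs would differ.

```python
def pca_control_step(hr, spo2, pump, alarm, hr_min=40, spo2_min=90):
    """Hypothetical PCA controller step: when the vital signs from the
    pulse oximeter suggest respiratory depression, command the infusion
    pump (actuator) to stop and alert the caregivers."""
    if hr < hr_min or spo2 < spo2_min:
        pump.stop()                                   # halt opioid delivery
        alarm.notify("possible respiratory depression; PCA halted")
        return "halted"
    return "ok"                                       # keep dispensing
```

Caregivers retain an override path outside this loop, matching the fail-safe interface requirement noted earlier.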
The random individuality and universal measurability that vary with time in such features provide the confidence to accept those sensors on the body while rejecting others not on the body. Therefore, PSKA effectively authenticates medical devices with the aid of the patients themselves, involving neither cryptography nor identification.

PSKA also functions as key distribution to facilitate less computation-intensive symmetric cryptography. By utilizing a fuzzy-vault cryptographic primitive, a sensor locks/hides a secret in a construct called a vault using a set of values A. Another sensor, having only a small subset of values in common with set A, can unlock/discover the secret. Sharing the same PPG signals, the sensors on the same body can reach agreement on a shared key. Thus, PSKA provides the apparatus for confidentiality, in addition to authentication, for its communications in the physical world.

We reclassify the on-body medical devices of MCPSs into two types: the sensors/actuators as data devices (Ds) and the controller as a single information aggregator (A). Our authentication framework in the physical world contains three stages: physiological feature generation, noncryptographic/nonidentifier authentication, and key agreement. Figure 13.8 illustrates the PSKA process to exemplify our authentication framework [19].

1. Feature generation: All Ds and A obtain physiological signal-based features using the four steps below.

   (a) The Ds and A sample (PPG) signals at the same time at a specific rate, irrespective of the parts of the body from which the signals are coming.
   (b) The samples are divided into windows, on each of which a fast Fourier transform (FFT) is performed.
   (c) The peaks in the FFT coefficients are detected.
   (d) Each of the peak index-value pairs is quantized into binary strings, which are concatenated to form a feature.
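The four feature-generation steps can be sketched as follows. This is an illustrative sketch only: the window size, number of peaks, and quantization widths are assumptions, not PSKA's actual parameters, and a fielded scheme would use fuzzy matching rather than exact string equality.

```python
import numpy as np

def ppg_features(samples, window=64, n_peaks=4, bits=4):
    """Sketch of PSKA-style feature generation:
    (b) split samples into windows and FFT each one,
    (c) detect the strongest spectral peaks,
    (d) quantize each (index, magnitude) pair to bits and concatenate."""
    feature = ""
    for start in range(0, len(samples) - window + 1, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        # indices of the n_peaks largest FFT coefficients
        peaks = sorted(int(i) for i in np.argsort(spectrum)[-n_peaks:])
        for idx in peaks:
            mag = int(spectrum[idx]) % (1 << bits)   # coarse quantization
            feature += format(idx, "06b") + format(mag, f"0{bits}b")
    return feature
```

Two devices sampling the same PPG signal at the same time would derive closely matching bit strings, which is what lets the fuzzy vault later reconcile the small differences between them.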
