
The Challenges of IoT, TLS, and Random Number Generators in the Real World

Bad random numbers are still with us and are proliferating in modern systems.

James P. Hughes, Whitfield Diffie

 

IoT (Internet of things) is now a first-class member of the Internet, communicating with cloud infrastructure. With this come additional requirements to ensure confidentiality, integrity, and authentication for every customer's data. The IETF TLS (Transport Layer Security) protocol is used for almost all Internet traffic security, but TLS is not as secure as the general public believes it to be. The current TLS protocol has been proven secure, but do IoT implementations live up to that promise? IoT does not always have the luxury of hardware RNGs (random number generators) or other features typically found on servers, laptops, or even phone processors. A long history of RNGs that were not as random as expected motivates this question.

TLS does not make things easy. It uses fragile constructions such as DSA (Digital Signature Algorithm), RSA (Rivest-Shamir-Adleman), and GCM (Galois/Counter Mode), and the protocol itself fails in many ways if the random numbers are not perfectly random. NIST (National Institute of Standards and Technology) and others have created standards for building, testing, and standardizing RNGs. These standards have been implemented in open-source projects that have made these tools available to the community, but there can be issues with RNGs even when using the standardized open-source libraries. Programmers are not solely to blame.

Many in the cryptographic community scoff at the mistakes made in implementing RNGs. Many cryptographers and members of the IETF resist the call to make TLS more resilient to this class of failures. This article discusses the history, current state, and fragility of the TLS protocol, and it closes with an example of how to improve the protocol. The goal is not to suggest a solution but to start a dialog toward making TLS more resilient by showing that TLS can be secure without the assumption of perfect random numbers.

 

A Well-established Standard

TLS is the secure communications protocol of choice for most applications communicating over the Internet. TLS is well established and well analyzed with proofs of security, and there are several interoperable open-source implementations. Choosing TLS is undoubtedly less risky than creating a proprietary cryptographic protocol.

IoT, however, is handicapped when it comes to implementing secure communications using TLS. IoT runs on lower-cost processors that lack processing power and many of the standard hardware security features found on larger processors. IoT traffic uses TLS, a fragile protocol that fails catastrophically if the random numbers are not perfectly random. IoT developers may not be willing or able to test that random numbers are random and properly handed to the TLS implementation. Prominent vendors have teams of full-time employees focused solely on these issues, but few, if any, IoT vendors can afford this luxury.

IoT vendors often leverage open-source TLS software, but that is not sufficient. A recent two-year survey of random numbers used by TLS traffic to and from the University of California Santa Cruz found an alarming number of implementations that offered their customers no security because of programming mistakes related to the generation and handling of random numbers.14

 

IoT processors

IoT needs low-cost and low-power processors compared with a typical desktop, server, or even phone processor. Low cost enables markets and cannot be ignored, but lower cost means lower performance and fewer features; lower power means longer battery life. These differences can significantly impact the TLS implementation.

An IoT vendor choosing a lower-cost processor, however, does not excuse the vendor from implementing sufficient security.

Less-expensive processors can lack secure enclaves and hardware-based TRNGs (true random number generators), functionality that the software must then provide. Without a secure enclave, the software must find other methods to protect private keys and the other secrets that an IoT system requires; lacking this protection can lead to problems if a device is lost, stolen, or thrown away. Without a TRNG, the software must find entropy, measure the entropy, and ensure that it is reliably communicated to the TLS layer. Even if the processor has a TRNG, that entropy must still be reliably communicated to the TLS layer.

These issues are solvable in software and must be solved perfectly to provide security for customers. There is no margin of error.

 

Fragile cryptographic protocols

An algorithm with more or stronger assumptions is considered more "fragile," even if these assumptions are not explicitly stated. An algorithm with fewer or weaker assumptions is considered more "robust." Many algorithms are fragile concerning random numbers, including DSA, ECDSA (Elliptic Curve Digital Signature Algorithm),5 RSA,15 and GCM.4 Even algorithms that are not fragile will create easily discovered keys if a CSPRNG (cryptographically secure pseudorandom number generator) is not seeded with enough entropy, because two TLS sessions can end up with identical random numbers for keys and nonces.14
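
To make the fragility concrete, the sketch below shows the textbook algebra for recovering a private key when the same ECDSA nonce is used for two signatures. (The lattice attacks cited above5 work even against small nonce biases; they are not shown here.) The variable names and the toy values in the self-test are illustrative assumptions, and ordinary modular arithmetic stands in for a real curve library.

```python
def recover_private_key(n, r, s1, z1, s2, z2):
    """Recover the ECDSA private key d from two signatures that reused a nonce.

    Both signatures share r because the nonce k repeated:
        s_i = k^-1 * (z_i + r*d) mod n
    so  k = (z1 - z2) / (s1 - s2)  and  d = (s1*k - z1) / r.
    """
    inv = lambda a: pow(a % n, -1, n)       # modular inverse mod the group order
    k = (z1 - z2) * inv(s1 - s2) % n        # recovered nonce
    return (s1 * k - z1) * inv(r) % n       # recovered private key


if __name__ == "__main__":
    n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order
    d, k, r = 0x1234567890ABCDEF, 0x42, 0x7777   # toy key, reused nonce, illustrative r
    z1, z2 = 1111, 2222                          # two message hashes
    s1 = pow(k, -1, n) * (z1 + r * d) % n
    s2 = pow(k, -1, n) * (z2 + r * d) % n
    assert recover_private_key(n, r, s1, z1, s2, z2) == d
```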

You could hope that the vulnerable traffic would not be noticed, but the TLS protocol makes finding insecure implementations trivial because raw random numbers are placed directly into the protocol. Seeing the same 32-byte number twice is a strong indication that the RNG was not properly seeded and that the traffic from that implementation is not secure.
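
This observation is simple to act on. A minimal sketch, assuming the 32-byte ClientHello random values have already been parsed out of a packet capture into an iterable of bytes objects:

```python
from collections import Counter

def repeated_hello_randoms(hello_randoms):
    """Return every ClientHello random value that appears more than once.

    With a properly seeded RNG the chance of any repeat is astronomically
    small, so even a single duplicate is a red flag.
    """
    counts = Counter(hello_randoms)
    return {value: n for value, n in counts.items() if n > 1}
```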

TLS makes finding multiple devices with the same implementations straightforward. Because so many options are available to TLS implementations, there is a vanishingly small probability that any two implementations will use the same set of options. The Zeek JA3 plug-in1 hashes the options to be able to fingerprint implementations reliably.
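
The JA3 fingerprint itself is just a hash: the ClientHello's version, cipher suites, extensions, elliptic curves, and point formats are joined into a string and run through MD5.1 A rough sketch of the idea follows; parsing the ClientHello and filtering GREASE values are assumed to have been done already.

```python
import hashlib

def ja3_style_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Hash already-parsed ClientHello fields into a JA3-style fingerprint.

    Each list of integers is dash-joined, the five fields are comma-joined,
    and the result is hashed with MD5, following the JA3 convention.
    """
    fields = [str(version)]
    for values in (ciphers, extensions, curves, point_formats):
        fields.append("-".join(str(v) for v in values))
    return hashlib.md5(",".join(fields).encode()).hexdigest()
```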

 

Testing

Testing standard software is straightforward because a program run twice on the same input produces the same output. That deterministic behavior goes out the window when random numbers are needed. Implementers need to test both that the numbers are random and that these numbers are actually communicated to TLS.

NIST has standards for random number generation and unit testing.2,20 Unit testing the algorithms is critical but insufficient as a system test. Problems can still occur with seeding, how random numbers are passed into TLS, or even how TLS uses these numbers. System testing TLS requires that the actual wire protocol be analyzed to ensure the randomness of the ClientHello random value.
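
One simple system-level check, complementing the NIST unit tests, is to run a statistical test over the ClientHello random values actually seen on the wire. A minimal sketch of the frequency (monobit) test from NIST SP 800-22, applied to captured bytes (the capture step is assumed):

```python
import math

def monobit_p_value(data: bytes) -> float:
    """Frequency (monobit) test from NIST SP 800-22 over a byte string.

    Returns a p-value; values below about 0.01 suggest the bits are biased.
    Passing is a sanity check, not a proof of randomness.
    """
    n = len(data) * 8
    ones = sum(bin(byte).count("1") for byte in data)
    s_obs = abs(ones - (n - ones)) / math.sqrt(n)   # +1 per 1 bit, -1 per 0 bit
    return math.erfc(s_obs / math.sqrt(2))
```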

History

Insecure RNGs threaten the security of TLS. Intentionally insecure RNGs are typically used to create systems with a back door, also known as kleptography. Unintentionally insecure random numbers are caused by simple programming mistakes when implementing a known correct RNG or implementing a custom RNG that does not have perfect entropy.

Intentionally insecure RNGs have been discovered and publicized. A device produced by Crypto AG starting in the 1970s used a random number algorithm that was "created by the NSA, which could therefore decrypt any messages enciphered by the machine."19 More recently, "NSA's backdooring of Dual EC [random number generator] was part of an organized approach to weakening cryptographic standards."3 These systems have been shut down.

Unintentionally insecure RNGs may be worse than intentionally insecure ones. An insecure RNG can let a product appear to perform some sensitive function while allowing anyone who knows the flaw to recover the data. One example of an unintentionally insecure RNG is the use of bad random numbers to create RSA keys. When insecure implementations are found, the vendors may have gone out of business or, worse, may be unwilling or unable to fix their devices. Open-source projects may be abandoned by their original developers, yet they continue to exist. Some organizations will intentionally continue making their insecure products available to an unsuspecting public.

One unintentionally insecure RNG problem that has affected IoT devices using RSA has been known for more than 10 years, yet it continues to grow. Beginning with an article in the New York Times in February 2012,18 a constant stream of academic papers has documented that RSA is fragile with less-than-perfect random numbers and that the situation is getting worse, not better.13,12,15

Secure RNGs can be built. Many papers, standards, and open-source implementations of secure RNGs have been published, yet engineers continue to get it wrong. "In fact, it seems that engineers are not able to get it right and it became a serious problem in cryptology."7 When cryptographers hear that their theoretical proof of security gets implemented using insecure random numbers, their comment is typically tantamount to "not my problem," and they often denigrate engineers as sloppy.

Cryptographers use public random numbers to show "freshness." This ensures that the conversation does not replay old messages or ciphertext. Public random numbers make cryptographers' security proof easier, thus "passing the buck" to the engineers.

Consider Frederick Brooks's comments about software engineering in what he calls "The Woes of the Craft" in his 1975 classic, The Mythical Man-month:6

Not all is delight, however, and knowing the inherent woes makes it easier to bear them when they appear.

First, one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program.

In many senses, cryptography is perceived as magic. Academic cryptographers can use perfect random numbers to create their incantations, but they should not blame the engineers when things go wrong. Practical cryptographers could acknowledge that not all programmers are perfect and adjust their protocols to provide the same or better security when the random numbers are perfect, while not failing spectacularly when the random numbers are imperfect.

 

Current State

Recent work has focused on the random numbers that TLS exposes in the protocol, using them to answer the question of how good these random numbers are.14 The cryptographers who designed SSL (Secure Sockets Layer) chose to use public nonces to ensure that the connection is fresh and not old traffic being replayed. SSL and now TLS use two 32-byte nonces called the ClientHello and ServerHello random values. These numbers are sent in the clear because there is no reason to hide that the connection is fresh.

In a perfect world where random numbers are random, these numbers have no relationship to numbers that have been used in the past or future. In a perfect world, the probability of seeing the same ClientHello or ServerHello random value twice is vanishingly small and is based on the birthday problem,10 which states that the probability that there is no duplicate random number from a population of size H after we have seen n values is

\[ \Pr[\text{no duplicate}] \;=\; \prod_{k=1}^{n-1}\left(1-\frac{k}{H}\right) \;\approx\; e^{-n(n-1)/(2H)} \]

The probability that there is at least one duplicate random number from a population of size H after we have seen n values is simply the probability that the above did not happen:

\[ \Pr[\text{at least one duplicate}] \;=\; 1-\prod_{k=1}^{n-1}\left(1-\frac{k}{H}\right) \;\approx\; 1-e^{-n(n-1)/(2H)} \]

After seeing 1 billion Hello random values, the probability of finding even a single repeated value is 1.85×10^-50. A probability that starts with 50 zeros after the decimal point is as improbable as someone winning the UK National Lottery jackpot seven times in a row. We found 29,884 distinct Hello random values, each of which occurred two or more times. Most of the implementations were IoT-class devices, a few of which are summarized in this article.
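
A quick check of that number, using the exponential approximation above: the quoted 1.85×10^-50 matches one billion draws from a space of H = 2^224, that is, treating 28 of the 32 bytes as random (an assumption on our part; TLS versions before 1.3 traditionally fill the first four bytes of the Hello random with a timestamp).

```python
import math

def p_duplicate(n: int, H: int) -> float:
    """Birthday bound: probability of at least one duplicate after n draws from H values."""
    return -math.expm1(-n * (n - 1) / (2 * H))   # 1 - exp(-x), precise for tiny x

print(p_duplicate(10**9, 2**224))   # ~1.85e-50, the figure quoted above
print(p_duplicate(10**9, 2**256))   # ~4.3e-60 if all 32 bytes were random
```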

You might wonder, "So what if these numbers that prove freshness are not perfectly random?" But that would overlook the fact that these public numbers come from the same generator as the key material. If the public random numbers are not perfectly random in the real world, then the key material is not random either, since both come from the same generator. Any insight into the key material is a significant threat to the privacy of the data.

 

Stuck-at implementations

Implementations whose RNGs are seeded with constants exhibit a single, constant ClientHello random value. These are referred to as "stuck-at" RNGs. Such systems seed their RNGs at the beginning of each TLS session, but they do not add any entropy. They may have a perfect RNG algorithm, but without entropy they generate the same numbers repeatedly. The values of the keys used and the public random numbers exposed by the protocol will be constant.
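
A sketch of the failure mode: a perfectly good deterministic generator, seeded with a constant at every boot, produces the same "random" output every time. The SHA-256 counter construction here is only an illustration and does not correspond to any particular library's DRBG.

```python
import hashlib

class ToyDrbg:
    """Deterministic generator: identical seed in, identical output out."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

# Two "boots" of a device whose seed function returns a constant:
boot1 = ToyDrbg(seed=b"\x00" * 32)
boot2 = ToyDrbg(seed=b"\x00" * 32)
assert boot1.random_bytes(32) == boot2.random_bytes(32)   # same ClientHello random,
                                                          # and the same session keys
```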

For example, if the ClientHello random value of

ad100cbcdb10a926acd41f7214d392887472dff54cbd720481b63e15

is detected, two things can be inferred: First, this is a device that is using the CyaSSL library.11 The seeding of the algorithm is shown in figure 1 (code copied directly from the CyaSSL file cyassl/ctaocrypt/src/random.c). This code is repeated four times and used as a placeholder for four sets of hardware.

GenerateSeed from CyaSSL
FIG 1 GenerateSeed from CyaSSL

Second, the session keys are constant and easily recovered. In this case, the TLS session is not providing any security for the traffic. At least three TLS implementations were found to have this specific issue.

In the second example of an insecure TLS implementation, repeated values were found, as shown in table 1. The table shows that the most common repeated value has no trailing zero bytes and that every possible count of trailing zero bytes occurs in the data, including an all-zeros value. Interestingly, the data appears to be based on a stuck-at value where the only randomness is how many of its bytes were copied into the TLS ClientHello random value. The worry is that the keys used by this implementation might also be constant or have a small number of possible values.
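
One hypothetical bug pattern that would produce exactly this distribution (a guess at the cause, not a confirmed diagnosis): a fixed source value copied into a zero-initialized 32-byte buffer, where the only thing that varies is how many bytes get copied. The constant and the length source below are placeholders.

```python
import random

STUCK_VALUE = bytes(range(32))       # hypothetical constant the device always produces

def buggy_client_random() -> bytes:
    """Copy a varying-length prefix of a stuck-at value into a zeroed buffer."""
    buf = bytearray(32)              # trailing bytes stay 0, as seen in table 1
    n = random.randint(0, 32)        # the only "randomness": how many bytes are copied
    buf[:n] = STUCK_VALUE[:n]
    return bytes(buf)
```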

Table of repeated values
Table 1 Table of repeated values


 

Low entropy

The third example of an insecure TLS implementation is an IoT device that may be a Google Assistant, judging by the Google servers with which the device is communicating. The repeated random numbers are visualized in figure 2, which shows the Google collisions from a four-hour subset of the data starting January 15, 2021, 6pm GMT. The figure shows one server (blue oval) and two client implementations (green ovals). These implementations duplicate random numbers (red ovals). This could be one or more devices with the same flaw.

Google Collisions Visualized
FIG 2 Google Collisions Visualized

Approximately 80 percent of the connections from this device use duplicated random numbers and are most likely vulnerable. The actual device has not been determined, but Google has asked for help in finding and identifying this device.

 

TLS is a Fragile Cryptographic Protocol

TLS is at the heart of e-commerce and the vast majority of secure communications on the Internet, but it comes up short on resilience to less-than-perfect random numbers. The algorithms that make up TLS are fragile: DSA and ECDSA leak their key material if the random numbers they use are even slightly nonrandom.5 The alternative to DSA is RSA, but that also has problems when the random numbers are not perfect.17 The most heavily used encryption mode, GCM, fails if the symmetric key and nonce are repeated.4
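
The GCM failure is easy to demonstrate with any AEAD library. The sketch below, which assumes the third-party pyca/cryptography package, shows that when a key and nonce pair repeats, the XOR of two ciphertexts equals the XOR of the two plaintexts, so an eavesdropper who knows or guesses one message learns the other. (Nonce reuse also enables forgery attacks,4 which are not shown here.)

```python
# Requires the pyca/cryptography package (pip install cryptography).
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x11" * 16
nonce = b"\x22" * 12                  # the fatal mistake: the same nonce twice
p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"

aead = AESGCM(key)
c1 = aead.encrypt(nonce, p1, None)    # returns ciphertext || 16-byte tag
c2 = aead.encrypt(nonce, p2, None)

# The same key and nonce produce the same CTR keystream, so the plaintext
# relationship leaks directly from the ciphertexts:
assert xor(c1[:16], c2[:16]) == xor(p1, p2)
```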

These failures are real and can be catastrophic for individuals who use software or IoT devices with imperfect RNGs. These issues have been known for decades, and there have been discussions and proposals, but nothing has been done. TLS could take a page from traditional network security and adopt defense in depth, but it seems that every time yet another bad RNG is discovered, the answer is to fix the RNG rather than make the protocol more resilient to bad random numbers.

 

Implementation fingerprinting

TLS makes finding bad random numbers easy in several ways. First and foremost is the requirement for the ClientHello random value. TLS essentially exposes the implementation because the options are not encrypted and the number of combinations of options allows implementations to be fingerprinted with high fidelity by simply hashing the options.1

Determining the TLS implementation with high fidelity can help find malware command-and-control networks. It is also valuable for finding bad RNGs. Making bad RNGs discoverable is useful for the security researchers looking for them, but it is equally valuable to organizations that exploit these issues. No documents suggest that identifying implementations with high fidelity is a feature that TLS wants to provide.

 

Station-to-station protocol

Ever since the publication of New Directions in Cryptography8 in 1976, cryptographers have been designing protocols to solve the issue of freshness, making sure that MITM (man-in-the-middle) and replay attacks are not possible. TLS has used the public ClientHello random value since at least SSL 2.0 in 1995 to solve this problem, but at that time, it was not the only known method.

In 1992, years before the first version of SSL, one of the authors (Diffie) described a solution to the freshness problem in "Authentication and Authenticated Key Exchanges,"9 which was built on secure telephony work going back to the late 1980s. In this design, random values are not sent in the clear. The protocol uses RNGs to create ephemeral private keys and exchanges only the ephemeral public keys; the freshness of the ephemeral keys provides the freshness in the STS protocol. Figure 3 shows the STS protocol, which extends the basic D-H (Diffie-Hellman) protocol8 to authenticate the endpoints and prevent MITM attacks.

Station to Station Authenticated Key Agreement Protocol
FIG 3 Station to Station Authenticated Key Agreement Protocol

The protocol starts with the system parameters (a generator g and a modulus) and Alice's (A) and Bob's (B) public signing keys sA and sB. Alice creates an ephemeral private key x, calculates the ephemeral public key X = g^x, and sends X to Bob. Bob then does the same, creating y and Y = g^y, and now has the information necessary to calculate the shared secret K. Bob signs the value (Y, X), encrypts the signature with the shared secret K to produce CB, and sends Y and CB. Alice now has what she needs to calculate K, decrypt CB, and verify Bob's signature; she now knows that she is talking to Bob. Alice can then calculate CA and send it to Bob. Finally, Bob verifies that he is talking to Alice by using K to decrypt CA and check Alice's signature, and the authenticated key exchange is complete.
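
A compact sketch of this flow, using X25519 for the ephemeral exchange, Ed25519 for the long-term signing keys, and AES-GCM keyed directly with the shared secret for the signature encryption. All of these algorithm choices (and the pyca/cryptography package) are illustrative stand-ins; the original protocol9 is described for classic Diffie-Hellman groups, and a real implementation would run the shared secret through a KDF.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives import serialization

def raw(public_key) -> bytes:
    return public_key.public_bytes(serialization.Encoding.Raw,
                                   serialization.PublicFormat.Raw)

# Long-term signing keys (in TLS these would be carried in certificates).
sign_a, sign_b = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# 1. Alice -> Bob: fresh ephemeral public key X (no cleartext random value needed).
x = X25519PrivateKey.generate()
X = raw(x.public_key())

# 2. Bob: fresh ephemeral key Y, shared secret K, and C_B = E_K(sign_B(Y || X)).
y = X25519PrivateKey.generate()
Y = raw(y.public_key())
k_bob = y.exchange(X25519PublicKey.from_public_bytes(X))
c_b = AESGCM(k_bob).encrypt(b"\x00" * 11 + b"\x01", sign_b.sign(Y + X), None)

# 3. Alice: compute K, decrypt C_B, verify Bob's signature -> Bob is authentic.
k_alice = x.exchange(X25519PublicKey.from_public_bytes(Y))   # equals k_bob
sign_b.public_key().verify(
    AESGCM(k_alice).decrypt(b"\x00" * 11 + b"\x01", c_b, None), Y + X)

# 4. Alice -> Bob: C_A = E_K(sign_A(X || Y)); Bob verifies -> Alice is authentic.
c_a = AESGCM(k_alice).encrypt(b"\x00" * 11 + b"\x02", sign_a.sign(X + Y), None)
sign_a.public_key().verify(
    AESGCM(k_bob).decrypt(b"\x00" * 11 + b"\x02", c_a, None), X + Y)
```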

 

Naxos–TLS

Figure 4 presents an alternative to the existing TLS protocol. The Naxos–TLS protocol extends both the STS and Naxos16 protocols to announce the certificate the client wants to connect to and to exchange certificates and options. The client certificate and client TLS options are transferred confidentially, so only A can know who is connecting, and passive monitoring cannot determine the implementation.

NAXOS–TLS Key Agreement Protocol
FIG 4 NAXOS–TLS Key Agreement Protocol

This protocol is not intended to be a complete proposal but a starting point to show that two of the vulnerabilities of TLS, the exposed ClientHello random value and the high-fidelity fingerprinting of clients, can be eliminated. There are additional countermeasures if the client's RNG does not have full entropy.

The protocol starts with STS and leverages the Naxos protocol16 by creating the session ephemeral key ekA ← {0,1}^λ. Alice then hashes the long-term secret skA with this session's ephemeral key ekA to form the session's private key x ← H1(skA, ekA). Alice caches x to use later.

Naxos has the advantage that, if the ephemeral key ekA does not have any entropy, the session loses only PFS (perfect forward secrecy). Losing PFS is not good, but it is better than losing privacy. The possibility of losing PFS is lower for servers than for clients because our research found that servers tend to be larger, more capable systems that do not have the same issues with random numbers as IoT clients. Alice sends the session public key X = g^x, along with the server's TLS options and certificate (optA, certA), to Bob.

Bob now has Alice's identity A and public key pkA. Similarly to Alice, Bob creates his session ephemeral key ekB ← {0,1}^λ. Here is where the protocols begin to diverge. Bob creates and caches his session private key y ← H1(skB, ekB, X), adding Alice's public value X to the hash. Adding X to the hash does not increase the entropy of y, since X is known to the attacker; however, if the entropy of ekB is lacking, adding X to the hash masks that fact from the attacker. Bob sends the session public key Y = g^y to the server.

TLS has the vulnerability that client implementations can be trivially fingerprinted, and if client certificates are used, the client's identity can be discovered. To hide the client certificate, TLS currently performs a full unauthenticated key exchange (which leaks the implementation) followed by key renegotiation (complexity that is not needed). It would be simpler to encrypt these values. In Naxos–TLS, Bob calculates a key that is authenticated and private to Alice but still contains Bob's entropy and freshness guarantee.

 

K1 = H2(X^y, pkA^y, A)

 

This key has the full entropy of the server and the client but does not authenticate Bob. Bob then sends his encrypted identity and TLS options to Alice.

 

C1 = eK1(certB, optB)

 

The encryption should use a combined authenticated-encryption mode that is not vulnerable to low-entropy keys or nonces.4 The message C1 leaks neither the implementation nor the identity of Bob.

Upon receiving C1, only Alice can calculate

 

K1 = H2(Y^x, Y^skA, A)

 

because she is the only other person who knows skA (and she holds the cached x). Decrypting C1 gives Alice both the identity of Bob, B, and Bob's public key pkB. Alice knows that the connection is fresh because of the entropy of her ekA.

Bob calculates the session key as

K = H2(X^y, X^skB, pkA^y, A, B)

 

Alice can also calculate the session key.

 

K = H2(Y^x, pkB^x, Y^skA, A, B)

 

The final two messages CA and CB prove that both Alice and Bob know K. The Naxos-TLS protocol is simpler than TLS, uses fewer algorithms, and is more tolerant of low-entropy random numbers, making it more robust and less fragile than TLS is today. The 2021 dissertation "BadRandom: The Effect and Mitigations for Low Entropy Random Numbers in TLS"14 provides more details about this protocol.
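
To check that both sides really do arrive at the same K1 and K, the sketch below runs the exponentiations in a small prime-field group, with SHA-256 standing in for H1 and H2. The group parameters, the hash-input encoding, and the use of string labels for the identities A and B are all illustrative assumptions, not part of the protocol definition.

```python
import hashlib, secrets

p = 2**89 - 1     # a small Mersenne prime; for illustration only, not a secure group
g = 3

def H(*parts) -> int:
    """Stand-in for H1/H2: hash the parts and interpret the digest as an integer."""
    data = b"|".join(str(part).encode() for part in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# Long-term keys: Alice (the server) and Bob (the client).
sk_A, sk_B = secrets.randbelow(p), secrets.randbelow(p)
pk_A, pk_B = pow(g, sk_A, p), pow(g, sk_B, p)

# Session ephemerals, hashed together with the long-term secrets (the Naxos trick).
ek_A, ek_B = secrets.randbelow(p), secrets.randbelow(p)
x = H("H1", sk_A, ek_A)
X = pow(g, x, p)
y = H("H1", sk_B, ek_B, X)       # Bob also mixes in X to mask a weak ekB
Y = pow(g, y, p)

# K1: Bob's version vs. Alice's version.
K1_bob   = H("H2", pow(X, y, p), pow(pk_A, y, p), "A")
K1_alice = H("H2", pow(Y, x, p), pow(Y, sk_A, p), "A")
assert K1_bob == K1_alice

# Session key K: Bob's version vs. Alice's version.
K_bob   = H("H2", pow(X, y, p), pow(X, sk_B, p), pow(pk_A, y, p), "A", "B")
K_alice = H("H2", pow(Y, x, p), pow(pk_B, x, p), pow(Y, sk_A, p), "A", "B")
assert K_bob == K_alice
```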

 

Conclusion

Bad random numbers are not a thing of the past; they are endemic and proliferating in today's deployed systems. Random numbers are difficult to generate correctly, and the pressures IoT makers live with do not make that any easier. The fragility of the TLS protocol in the face of less-than-perfect random numbers is a flaw caused by cryptographers assuming that random numbers are easy to get right and taking neither the time nor the effort to analyze, let alone accommodate, systems with less-than-perfect random numbers.

As shown here, TLS could be made significantly more secure in multiple ways. The raw random numbers are not needed for freshness, and exposing them allows the RNG to be analyzed by adversaries. High-fidelity fingerprinting of implementations is a non-goal that has no value to individuals relying on TLS but is invaluable to attackers looking for insecure implementations. TLS does not accommodate less-than-perfect random numbers, and a simple change to add the entropy of the server to the protocol would provide some protection to clients whose random numbers are not perfect.

The optimists' hope is that this will lead to a discussion of how TLS can be made more robust, or at least to mitigations for less-than-perfect client RNGs. TLS is an essential protocol that could be easier to use safely. Making TLS more robust would result in a higher level of security on the Internet.

 

References

1. Althouse, J. 2019. TLS fingerprinting with JA3 and JA3S. Salesforce Engineering; https://engineering.salesforce.com/tls-fingerprinting-with-ja3-and-ja3s-247362855967.

2. Barker, E.B., Kelsey, J.M., et al. 2007. Recommendation for random number generation using deterministic random bit generators (revised). U.S. Department of Commerce, National Institute of Standards and Technology; https://www.nist.gov/publications/recommendation-random-number-generation-using-deterministic-random-bit-generators-2.

3. Bernstein, D.J., Lange, T., Niederhagen, R. 2016. Dual EC: a standardized back door. In Lecture Notes in Computer Science Essays, The New Codebreakers, volume 9100, ed. P.Y.A. Ryan, D. Naccache, and J.-J. Quisquater, 256–281. Springer-Verlag; https://dl.acm.org/doi/abs/10.1007/978-3-662-49301-4_17.

4. Böck, H., Zauner, A., Devlin, S., Somorovsky, J., Jovanovic, P. 2016. Nonce-disrespecting adversaries: practical forgery attacks on GCM in TLS. In 10th Usenix Workshop on Offensive Technologies; https://www.usenix.org/conference/woot16/workshop-program/presentation/bock.

5. Breitner, J., Heninger, N. 2019. Biased nonce sense: lattice attacks against weak ECDSA signatures in cryptocurrencies. In 23rd International Conference on Financial Cryptography and Data Security, ed. I. Godberg and T. Moore, 3-20. Springer International; https://www.springerprofessional.de/en/biased-nonce-sense-lattice-attacks-against-weak-ecdsa-signatures/17265526.

6. Brooks Jr., F.P. 1995. The Mythical Man-month: Essays on Software Engineering. Addison-Wesley Professional.

7. Courtois, N.T., Hulme, D., Hussain, K., Gawinecki, J.A., Grajek, M. 2013. On bad randomness and cloning of contactless payment and building smart cards. In Proceedings of the IEEE Security and Privacy Workshops. IEEE, 105–110; https://dl.acm.org/doi/10.1109/SPW.2013.29.

8. Diffie, W., Hellman, M. 1976. New directions in cryptography. IEEE Transactions on Information Theory 22(6), 644–654; https://ee.stanford.edu/~hellman/publications/24.pdf.

9. Diffie, W., Van Oorschot, P.C. Wiener, M.J. 1992. Authentication and authenticated key exchanges. Designs, Codes and Cryptography 2(2), 107–125; https://dl.acm.org/doi/10.1007/BF00124891.

10. Flajolet, P., Odlyzko, A.M. 1989. Random mapping statistics. In Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques, 329–354. Springer; https://dl.acm.org/doi/10.5555/111563.111596.

11. Garske, D. 2021. Deprecate CyaSSL library #151. GitHub; https://github.com/cyassl/cyassl/pull/151.

12. Hastings, M., Fried, J., Heninger, N. 2016. Weak keys remain widespread in network devices. In Proceedings of the Internet Measurement Conference, 49–63; https://dl.acm.org/doi/10.1145/2987443.2987486.

13. Heninger, N., Durumeric, Z., Wustrow, E., Halderman, J.A. 2012. Mining your Ps and Qs: detection of widespread weak keys in network devices. In Proceedings of the 21st Usenix Security Symposium, 35; https://dl.acm.org/doi/10.5555/2362793.2362828.

14. Hughes, J.P. 2021. BadRandom: the effect and mitigations for low entropy random numbers in TLS. Ph.D. dissertation. UC Santa Cruz; https://escholarship.org/uc/item/9528885m.

15. Kilgallin, J., Vasko, R. 2019. Factoring RSA keys in the IoT era. In First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), 184–189. IEEE; https://ieeexplore.ieee.org/document/9014350.

16. LaMacchia, B., Lauter, K., Mityagin, A. 2007. Stronger security of authenticated key exchange. In International Conference on Provable Security, 1-16. Springer; https://link.springer.com/chapter/10.1007/978-3-540-75670-5_1.

17. Lenstra, A.K., Hughes, J.P., Augier, M., Bos, J.W., Kleinjung, T., Wachter, C. 2012. Public keys. In Proceedings of the 32nd Annual Conference on Advances in Cryptology, 626–642. Springer; https://dl.acm.org/doi/10.1007/978-3-642-32009-5_37.

18. Markoff, J. 2012. Flaw found in an online encryption method. New York Times (January 14); https://www.nytimes.com/2012/02/15/technology/researchers-find-flaw-in-an-online-encryption-method.html.

19. Paul, J.D. 2021. The scandalous history of the last rotor cipher machine. IEEE Spectrum; https://spectrum.ieee.org/the-scandalous-history-of-the-last-rotor-cipher-machine.

20. Turan, M.S., Barker, E., Kelsey, J., McKay, K.A., Baish, M.L., Boyle, M., et al. 2018. Recommendation for the entropy sources used for random bit generation. NIST Special Publication 800-90B. U.S. Department of Commerce, National Institute of Standards and Technology; https://csrc.nist.gov/publications/detail/sp/800-90b/final.

 

James P. Hughes has had a career in storage, networking, and cryptography and is the holder of more than 50 patents. He received his Ph.D. from UC Santa Cruz in 2021.

Whitfield Diffie is best known for pioneering public-key cryptography in the early 1970s. Before his 1976 paper "New Directions in Cryptography," written with Martin Hellman, encryption technology was primarily the domain of government. Public-key cryptography and the Diffie-Hellman key negotiation protocol made cryptography scalable to the Internet and revolutionized the landscape of security. For this work, Diffie and Hellman shared the ACM Turing Award in 2015. Diffie is a member of the National Academy of Engineering, a Foreign Member of the Royal Society, and was inducted into the Cryptologic Hall of Honor of the U.S. National Security Agency in 2020.

Copyright © 2022 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 20, no. 3