Some believe the emergence and proliferation of Artificial Intelligence (AI) represents humanity’s “fourth industrial revolution” and that it will drive evolutionary and revolutionary innovation — i.e., make us better at what we do (the things we know) and shape what we do in the future and how we do it (what has yet to be done).1 The breadth of AI possibilities is not easy to conceptualize, but there is great interest in understanding AI and how it can be effectively and responsibly leveraged.
In 2017, for example, the United States (U.S.) Congress issued a joint resolution whose title fairly captured the prevailing sentiment. The Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017 (“FUTURE Act”), in part, directed a study focused on better understanding current AI applications, the potential of AI, its current and expected impacts across society, options for increased government support for AI development, and legal and policy shortfalls.2
More significantly, the 2019 National Defense Authorization Act tasked the Department of Defense (DoD) to “establish a set of activities within the Department of Defense to coordinate the efforts of the Department to develop, mature, and transition artificial intelligence technologies into operational use.”3 Congress’s zeal for understanding AI and promoting AI-related activity is apparent, and so too is its recognition that effective incorporation of AI across our society will require significant funding and changes to our legal and policy framework.
With its speedy establishment of the Joint Artificial Intelligence Center (JAIC), the Army Artificial Intelligence Task Force, and a commitment of up to two billion dollars in expected investment over the next five years, the DoD has demonstrated a healthy focus on current and future AI requirements.4 However, the mandate is now clear: additional funding and other institutional changes are required if the DoD intends to build a meaningful capacity for developing and fielding relevant AI applications. Despite the best of intentions, human attempts to place limits on AI and AI applications will be tested. As machine learning and AI capabilities compound, humans will need to be proficient in the design principles and development of responsible AI tools.
Artificial intelligence will manifest in almost every form because it holds promise for greater precision and capacity in almost every DoD task, from logistics, intelligence gathering, and major weapons systems, to medical and legal services and personnel management. Artificial intelligence will undoubtedly make us faster; but it will also evolve rapidly in ways that will challenge the DoD’s models for funding, development, fielding, and use of new technologies. So, where should this begin?
The DoD has already taken significant steps to promulgate clear principles to guide AI integration and use. But if the DoD is to compete meaningfully in the AI race, it also requires additional funding, acquisition tools that provide flexibility for the development, production, and implementation of AI applications and systems, and an AI workforce capable of competently participating in that process.
The National Defense Authorization Act for Fiscal Year 2019
There was little concrete law or policy related to AI development and integration that the DoD could exploit to further the AI discipline prior to 2018, but the National Defense Authorization Act (NDAA) for Fiscal Year 2019 (FY 2019) contained significant authority and requirements for the DoD to explore, develop, and field AI capabilities across the force. Section 238, titled “Joint Artificial Intelligence Research, Development, and Transition Activities,” specifically tasked the Secretary of Defense to “apply artificial intelligence and machine learning solutions to operational problems and coordinate activities involving artificial intelligence and artificial intelligence enabled capabilities within the Department.”5 The NDAA also required the Secretary to designate a senior official within the Department to lead all AI development activities; devise a DoD strategy; accelerate fielding of capabilities using every flexible acquisition authority available; develop AI capabilities for operational requirements through regular engagement with industry, experts, and academia; build and maintain a competent workforce; leverage the private sector; and develop legal and ethical policies to govern AI development and employment.6
This senior official was also tasked with conducting a year-long study to review “advances in artificial intelligence, machine learning, and associated technologies relevant to the needs of the Department and Armed Forces, and the competitiveness of the Department in artificial intelligence, machine learning, and such technologies,”7 and to make recommendations for securing and growing the DoD’s technological advantage in AI; leveraging private technological advancements and commercial AI options; re-organizing the Department to meet AI requirements; training and educating an AI-capable workforce; devising a framework for better funding for the DoD; and pursuing required changes to existing authorities that were “relat[ed] to artificial intelligence, machine learning, and associated technologies.”8
From a legal and policy standpoint, this was a watershed moment for the DoD. The NDAA requirements will prove a massive undertaking, but the mandate is clear and, if exploited, will further facilitate effective AI development and fielding. Despite the breadth of these requirements, meaningful compliance will chiefly hinge on four key factors:
- an ethical foundation for all AI development and use;
- increased funding so that the DoD and the U.S. can keep pace in the AI race;
- more acquisition flexibility for AI research and development and fielding; and
- workforce reform focused on attracting, developing, and exploiting a capable AI workforce.
Whether all four factors are completely achievable is unknown, but the DoD has taken some significant steps to advance the cause.
On 11 February 2019, the President released his AI strategy, which was intended to serve as a guidepost for government, industry, and academia in the great pursuit of AI capabilities.9 This so-called American AI Initiative (Initiative)10 is built around five “guiding principles” and six “strategic objectives”11 that are intended to foster a coordinated effort for AI development and fielding among the government, industry, and academia, and to articulate the United States’ vision for leading the AI race in:
- development of technology across the “Federal Government, industry, and academia;”
- adoption of standards and the reduction of “barriers to safe testing and deployment of AI technologies” to promote the growth of the AI industry and its use of AI;
- development of an AI-competent workforce;
- protection of “civil liberties, privacy, and American values;” and
- setting conditions internationally that “support[ ] American AI research and innovation . . . , markets for American AI industries,” and the protection of our AI advantage and capabilities from “acquisition by strategic competitors and adversarial nations.”12
Similar to current efforts in the DoD to attract cyber professionals, the Initiative highlights direct commissioning of AI talent as a priority program—which, if implemented and exploited, could attract some significant talent into the AI ranks.13 The Initiative also tasks the Office of Management and Budget to issue agency-informed guidance for regulating AI in ways that protect innovation, civil liberties and American values, and access to AI technology,14 which provides a window of opportunity for the DoD and other agencies to shape required changes to any regulatory framework that could hamper effective AI integration.
The Department of Defense quickly followed suit on 12 February 2019 and released its own strategy (DoD Strategy) to articulate, in part, its commitment to “lead [the] responsible use and development of AI” and its “vision and guiding principles for using AI in a lawful and ethical manner.”15 Its strategic approach for development and fielding of AI capabilities focuses on:
- rapid, responsible fielding of AI capabilities for key missions;
- decentralized development and experimentation, and scalability across the force;
- development of a “leading AI workforce” through focused partnering, training for existing employees, and recruitment;
- partnering with industry, academia, and international allies and partners to address “global challenges of significant societal importance,” ensuring appreciation of defense challenges and investment in AI research and development, and training and development of the next generation of AI talent; and
- responsible leadership in “military ethics and AI safety” through, in part, development of standards for testing and verification of reliable systems, and development of AI applications focused on reducing collateral damage and harm to civilians on the battlefield.16
Consistent with the requirements of the NDAA, the DoD Strategy highlights the JAIC as the “focal point of the DoD AI Strategy” and tasks it to deliver AI solutions for key missions; foster focused research and development; manage scalability of AI applications across the DoD; set data use and acquisition standards; lead AI planning efforts, governance, ethics, and coordination; and develop and maintain an AI-capable workforce through recruitment and training.17
Both the Initiative and the DoD Strategy offer more than a glimpse into U.S. and DoD intentions for the development, integration, and fielding of AI. They also offer worthwhile, responsible policy decisions on some of the obvious concerns that many have at the mere mention of AI. For example, like the FY 2019 NDAA, both the Initiative and DoD Strategy highlight ethics as a critical component of AI development and employment. Adopting and institutionalizing an ethical framework for all AI initiatives is vital to the DoD’s continued compliance with domestic and international legal obligations and preservation of trust with industry, academia, and other enablers necessary for the DoD to compete effectively.
“[T]he inclusion of artificial intelligence ethics and safety in the NDAA is the first step for the United States to become a global worldwide leader in AI ethics and governance.”18 Thanks to the NDAA, fostering and articulating an ethical foundation in the DoD for AI integration is now required by law. This focus is not novel to the DoD, and it makes sense for a number of other reasons. In 2012, the DoD issued guidance requiring “autonomous and semi-autonomous weapon systems [ ] be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”19 Weapons represent one small piece of AI’s potential, but they are not the only types of AI applications that may worry skeptics. The 2012 policy was an important first step toward addressing any “killer-robot” concerns harbored by the public or the DoD’s enablers.
That policy statement, however, was certainly not the cure-all to conflict with important industry partners. In 2018, Google decided to forgo renewal of a contract with the DoD for its Project Maven venture—a project designed to use AI to analyze full motion video for use in any number of applications, including lethal targeting.
About 4,000 Google employees signed a petition demanding ‘a clear policy stating that neither Google nor its contractors will ever build warfare technology.’ A handful of employees also resigned in protest, while some were openly advocating the company to cancel the Maven contract.20
Other contractors and academic institutions could, obviously, follow Google’s path and there is probably little the DoD could do to change their course. Despite this, and the negative outcome for the DoD, “[t]he Maven episode represents a rare role reversal for a contractor and the Pentagon, with the Defense Department being more open—or at least consistent—in their messaging than the contractor they were paying.”21 This is an important posture the DoD has to maintain to ensure future credibility with all stakeholders in AI development.
To advance the ethical cause, and consistent with the FY 2019 NDAA requirements, the President’s Initiative, and the DoD Strategy, the DoD tasked the Defense Innovation Board (DIB) to devise a list of ethical principles for the use of artificial intelligence “to guide a military whose interest in AI is accelerating . . . and to reassure potential partners . . . about how their products will be used.”22 This ethical transparency is critical and must extend as well to the data and algorithms used to prevent, to the greatest extent possible, biased AI systems. The most pervasive aspect of AI is machine learning, in which algorithms applied to data sets learn from that data.23
“The real safety question . . . is that if [ ] [the DoD] give[s] these [AI] systems biased data, they will be biased.”24 The same holds true for the algorithms used. In fact, “[s]ome experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.”25
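The point that a system trained on biased data reproduces that bias can be made concrete with a toy sketch. The groups, labels, and trivial “majority label” learner below are hypothetical illustrations, not any actual DoD system or method:

```python
# Illustrative sketch: a model trained on skewed data reproduces the skew.
# The "model" here simply memorizes the most common label seen per group,
# so any imbalance in data collection becomes the model's behavior.
from collections import Counter, defaultdict

def train(records):
    """Learn, for each group, the most common label in the training data."""
    by_group = defaultdict(Counter)
    for group, label in records:
        by_group[group][label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

def predict(model, group):
    return model[group]

# Hypothetical training data in which group B was mostly labeled "deny"
# during collection, despite identical underlying merit.
training = [("A", "approve")] * 90 + [("A", "deny")] * 10 \
         + [("B", "approve")] * 10 + [("B", "deny")] * 90

model = train(training)
print(predict(model, "A"))  # approve
print(predict(model, "B"))  # deny -- the bias in the data becomes the model
```

Real machine learning systems are far more complex, but the dynamic is the same: nothing in the training process corrects a skew the data already contains, which is why auditing data sources and algorithms matters.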
In the DoD context, employing biased systems could prove lethal in targeting applications,26 and in areas like intelligence gathering and even employment actions within the Department, could lead to unintended violations of civil liberties and other legal obligations. On 21 February 2020, the DoD adopted the following five principles, consistent with the DIB’s recommendations, “for the design, development, deployment, and use of AI capabilities:
- Responsible: DoD personnel will exercise appropriate levels of human judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”27
Addressing these ethical aspects of AI and forcing an ethical transparency in the development and employment of AI systems is critical for setting expectations within the Department, ensuring development and use of certain systems remains within acceptable boundaries, and protecting continued access to private sector resources that will be critical for the DoD to remain competitive in the AI realm.
Institutionalizing these ethical principles will not be easy, and the DoD’s steadfastness will likely be challenged significantly as AI capabilities expand around the world—most notably from peer and near-peer competitors like China and Russia. For example, the United States and other western countries have exercised a healthy degree of transparency and advertised their policies on certain aspects of AI, particularly the possibility of future autonomous weapons, but Russia has not and remains generally closed off from the discussion. Likewise, China has been obscure in its public position and may have “fewer moral qualms about developing lethal autonomous weapons systems.”28 The lack of any public position from China and Russia leads some to think we “could very well be at the starting blocks” of an “autonomous weapons race,”29 which could significantly test the United States’ and the DoD’s current blueprint. Nonetheless, there is value in the effort. Articulating a strong ethical position on autonomous weapons, and other potentially controversial AI applications, will serve as a vital backstop in AI development and use, and will also protect the DoD and the United States from going down objectionable paths that could alienate critical AI enablers.
The United States and China currently outpace the rest of the world in AI investment by a considerable margin. At current prospective rates of investment, however, China could own roughly half of the expected worldwide investment—a whopping $15.7 trillion—in AI technologies over the next decade. By 2030 they aspire to reach over $150 billion in government AI investment which would, in their view, place them as the world’s leader in AI technologies.30 China currently outpaces the United States in other AI metrics, such as patent applications, research, and scholarly papers.31 They are also exploiting “lower barriers to data collection” and building massive sets of training data for AI applications, with the potential to own 30 percent of all worldwide data by 2030.32 Whether their efforts ultimately translate into more patents and products remains to be seen, but the operating space China enjoys, with massive amounts of available data, fewer restrictions on the use of that data, supportive laws, and an innovative, start-up culture, should be a warning sign for U.S. policymakers and appropriators.33
One good example of China’s ambition to grow is their zeal for big data, exhibited in part through their plan to add 400 million surveillance cameras to the 170 million that currently exist across the country.34 In 2017, the DoD reportedly spent $7.4 billion on AI compared with China’s total investment of $12 billion. China, however, has plans to increase that budget to $20 billion by 2020.35
For the DoD, current AI funding levels are arguably inadequate. In fact, while China and Russia have exponentially expanded their investments in AI technologies, the United States has remained relatively stagnant.36 There has been much publicity over the Defense Advanced Research Projects Agency’s (DARPA’s) pledge to spend $2 billion over the next five years and an additional unplanned $1.75 billion for the JAIC, but the DoD’s total investment remains elusive.37 Then-Deputy Secretary of Defense Patrick Shanahan confirmed as much in October 2018 when he indicated that the Pentagon does not know how much it is spending on AI because “it’s such a broad definition.”38 Defining AI and crafting a plan for coordinating all AI activities in the Department should therefore be step one for the JAIC as it attacks the FY 2019 NDAA requirements. Distributing AI investment outside of clearly labeled AI programs is untenable for the DoD, especially when credibility is such a key component to continued support from Congress, the American people, and private institutions and enablers. The FY 2019 NDAA mandate to the DoD is clear, and provides them the space to work with enablers to define short- and long-term requirements, budget appropriately, and advocate for necessary AI-specific funding lines and resources.
AI Development and Acquisition
A common theme across the FY 2019 NDAA, the President’s Initiative, and the DoD Strategy is the requirement for speed in the development and fielding of AI capabilities. The fundamental question, though, is whether current acquisition tools and authorities are sufficient to meet that requirement or—as the FY 2019 NDAA recognizes—whether further tailored fixes are required for AI acquisition. “Challenges persist, in part, because decades of legislation and policy initiatives that governed, and often attempted to reform, the acquisition system continue to rely on unique terms, conditions, and processes better suited to the industrial age, not the information age, much less the rapidly approaching artificial intelligence age.”39
In January 2019, the DoD’s so-called Section 809 Panel (the Panel) concluded its nearly two-year effort to help transition DoD acquisition “to a more streamlined, agile system able to evolve in sync with the speed of technology innovation.”40 The Panel arose from a FY 2016 NDAA (Section 809) requirement tasking the DoD to convene experts to study the acquisition system and make recommendations for streamlining processes while still protecting the DoD’s technological advantage.41 The Panel’s work was extensive, resulting in a number of worthwhile administrative and substantive recommendations.42 Congress, likewise, has been active in acquisition reform and, from 2016 to 2018, passed an average of eighty-two provisions each year related to acquisition, compared to an average of forty-seven provisions per year over the preceding decade.43
Sections 804 and 806 of the 2016 National Defense Authorization Act,44 and the DoD’s statutory “other transaction authority” (OTA),45 are exceptions available to rapidly develop, fund, and field AI capabilities. As exceptions, however, they serve to highlight a core conflict—i.e., our law and policy remain anchored to the ideal of competitive acquisition processes. Competition drives innovation which, in theory, gets us the best products.46 Competition also promotes the worthwhile goal of socioeconomic development across various sectors through government spending.47 Sections 804 and 806 of the 2016 NDAA are focused authorities that “permit[ ] rapid acquisition and rapid fielding for middle tier programs intended to be completed in two to five years, and . . . allow[ ] the Secretary of Defense to waive any provision of acquisition law or regulation if the acquisition of the capability is in the vital national security interest of the United States.”48 Section 804 is limited to projects lasting two to five years and focuses on rapid prototyping and fielding. Rapid prototypes under Section 804 need to be operationally capable within a five-year window; rapid fielding requires no more than six months to initial production and five years to fielding.49 This is flexible authority, no doubt, but given its limited scope, Section 804 does not provide the strongest of foundations for developing and fielding long-term and enduring revolutionary and evolutionary applications across the DoD’s footprint.
Section 806 expands the DoD’s OTA flexibility for prototyping and production. These OTA transactions provide a tangible alternative to traditional acquisition models and have proven a valuable tool for both developing and fielding AI applications across the force.50 Thus, OTAs provide a streamlined option for AI prototyping and development, namely because there is no prescribed format or other requirement for instruments or processes used; they are flexible and can be sole-sourced or competed. They can also be used for acquisition of final products after prototyping.51 “From FY 2016 through FY 2018, the combined total estimated [potential] value for [ ] [OTAs] was around $40 billion . . . [with] only 10 percent of that value, or about $4.2 billion [ ] spent.”52
Some, including the Panel, argue OTAs should be “embraced and expanded” for AI development and fielding,53 citing the NDAA provisions expanding OTA as indicative of Congress’s permissiveness.54 There is merit to that argument, particularly in the short-run, but OTAs are not the institutional cure-all for the bureaucracy and inefficiency that plagues our current acquisition workforce and processes.55 In fact, as of 2016, OTAs remained a less-than-favored option across the DoD and most federal agencies for a number of reasons. Chiefly, as a recent Congressional Research Service study noted, many intra-agency policies require justification for their use, even when technically not required.56 The lack of any prescribed format or other guidance for executing OTAs makes them more challenging for government acquisition personnel to process than traditional Federal Acquisition Regulation-based contract options, which, in the recent past, directly contributed to their underutilization. Unfortunately, this highlights a significant competence gap across the federal acquisition workforce.57 Moreover, OTAs can require OSD-level approval—which dilutes some of the claimed efficiency—and regular notification to Congress—which implies a certain uneasiness with deviations from competition.58 Other transaction authorities also carry risk in the process, namely with “transparency and accountability,”59 and run somewhat counter to the socioeconomic goals achieved through competitive acquisition procedures. None of this is to suggest OTAs are bad and should not be exploited. But, with elevated approval levels and oversight, lack of transparency, and exemption from competition requirements, OTAs alone are likely not sufficient to meet DoD requirements. There is a balance between speed and competition the DoD can adopt that protects the integrity of the process and supports acquisition of the best possible products.
Thus, OTA authority could be modified to restructure approval levels and oversight, include provisions favoring or requiring competition, albeit streamlined, and add reasonable levels of internal checks to ensure transparency. Another, at least partial, solution would be to modify the Competition in Contracting Act and Federal Acquisition Regulation Part 6 requirements by stripping out time or other constraints that could serve to stymie speedy acquisition.60
Another significant change in recent NDAAs is found in Section 879 of the FY 2017 NDAA, which authorized the Commercial Solutions Opening (CSO) pilot program, giving the DoD authority to use streamlined acquisition procedures for commercial technology contracts valued up to $100 million and to make awards within 60 days. The program was based on processes used successfully “by the Defense Innovation Unit (DIU) and Defense Information Systems Agency in using broad agency announcements (BAAs) to solicit technical proposals.”61 One example of DIU’s success has been Project Maven, where it was able to award contracts within a matter of weeks using competitive procedures.62 With its $100 million cap, the CSO pilot program—like Section 804 authority and OTAs—has limited applicability and—as the Section 809 Panel noted—“[i]t is still too early to comment on the current DoD initiatives that are designed to experiment with these new or expanded authorities.”63
Transparency with Congress will remain critical regardless of which processes the DoD uses. The Panel specifically advised the DoD to maintain full transparency in its use of these authorities to protect against congressional backlash and to improve these programs through lessons learned.64 Given the call of the FY 2019 NDAA AI provisions, and the technology focus of structured changes to acquisition authority over the last four years, the DoD seems in position to significantly influence AI acquisition solutions with Congress in the coming years, and they should exploit the opportunity to offer focused, realistic recommendations for future legislation.
Maintaining flexibility in approaches available to develop and field AI-capabilities is, no doubt, important, but until the DoD builds an acquisition workforce comfortable exploiting that flexibility, they will likely not realize the full benefit of the latitude granted by Congress. “In an era of great power competition centered on emerging technologies and how militaries adapt to them, human capital inefficiency is a strategic risk.”65 The NDAA, President’s Initiative, and DoD Strategy obviously recognize this risk, and the DoD has the opportunity to shape future decisions and authorities regarding the organization of the Department and its AI capable workforce.66
Building the right DoD workforce to develop, field, and use AI applications will be critical to effective and responsible employment of AI capabilities because outsourcing options are not likely to be universally suitable for AI applications for a few significant reasons. First, much of the work involved in getting these machines to learn—like the data sifting and feeding—can be inherently governmental, which greatly limits options for contracted support.67 Inherently governmental functions are those “so intimately related to the public interest as to require performance by federal government employees.”68 Authority to perform inherently governmental functions flows from the Appointments Clause of the Constitution through the executive. Despite persistent debate regarding the scope of inherently governmental functions, this is not a restraint easily remedied through regulatory or statutory change.69 The Panel highlighted “critical functions” as another potential limitation that could impact outsourcing options. Critical functions are those “necessary to the agency being able to effectively perform and maintain control of its mission and operations.”70 These critical functions are not necessarily inherently governmental, but the Panel cautioned the DoD to determine which need to be performed by DoD employees and to “ensure [DoD employees] have appropriate training, experience, and expertise to understand the agency’s requirements, formulate alternatives, manage work product, and monitor any contractors used to support the federal workforce.”71
Second, there is no doubt that we have willing partners in industry, but the Project Maven experience with Google72 serves to highlight the real friction and negative impacts that can arise with certain types of development, and reinforces the notion that internal expertise will be essential to ensure the DoD maintains adequate momentum in the development and fielding of AI applications.
Third, security for AI applications will be paramount. The DoD obviously maintains significant leverage over contractors in matters related to security, but undeniably loses some level of control over those things they outsource. In the AI world, the algorithms that drive machine learning are still very fragile and vulnerable to manipulation, which—depending on the application—could have catastrophic and very lethal consequences.73 The DoD could never effectively internalize everything, nor should they, because the private sector will drive AI innovation. There is an imperative need, however, to maintain a capable internal capacity for those things too risky to outsource and to serve as a competent check and balance for AI development and acquisition.
Without significant training investment, the DoD’s current civilian workforce will not be able to keep up with the speed, precision, and expertise AI development and acquisition will require, and, due to the nature of uniformed service, only a small number of military personnel will likely have any long-term impact on AI innovation. Training and recruitment of an AI workforce will need to maintain pace with innovation, which will require radical change across the various levels of our labor and employment authorities. Incentivizing a long-term, capable workforce will require additional tools—like competitive, adaptive pay structures; faster, more responsive hiring and firing authority; and exceptions from union coverage and rules—to attract and retain AI talent. The concept is not overly radical, and could be easily addressed with a few focused statutory and policy changes to labor-management relations rules (union)74 and the GS classification and pay framework.75 And, there is relevant precedent. The DoD has implemented a pilot of the Acquisition Demonstration Project (AcqDemo), a performance-based incentive program for acquisition personnel.
The program provides incentive and pay flexibility not found in the GS classification system, but over time tends to even out with the GS levels of pay.76 While a similar system applied across the civilian workforce could buttress recruitment, it would fall flat on retention, particularly considering the DoD will need to compete for talent against high-paying technology giants, where median pay can and does far exceed the highest levels of GS compensation.77 The President has some authority to exclude, and has excluded, many agencies and subdivisions from labor relations rules (union) coverage, and can, and has, adjusted pay rates within the statutory pay grades.78 Applying the same focus to an AI workforce—and tailoring relevant statutory and policy changes to create an incentive-heavy, at-will-like system to hire, fire, and pay for talent—is required if the DoD wants to maintain meaningful internal capacity for driving AI development and fielding.
Another hindrance to workforce development is the DoD’s current byzantine hiring process, described by current Secretary of Defense and former Secretary of the Army Mark Esper as “a fundamentally flawed system.”79 In the competition for talent, the DoD will be greatly disadvantaged without radical change. Whereas a technology firm could realistically bring a new hire onboard in a matter of days, the DoD is not so fortunate, averaging a reported 100 days for new hires. Additional administrative burdens, like the paperwork required for a clearance background investigation, can also drive potential candidates away. The DoD has reportedly committed to reducing hiring timelines to no more than 80 days. Secretary Esper does not think that is ambitious enough and is targeting a process to support a thirty- to forty-five-day hiring window. He has also advocated for transfer of control of all DoD civilian employees from the Office of Personnel Management to the DoD. Whether these steps are sufficient to attract the AI software engineer who has the private sector option to start on Monday remains to be seen, but Secretary Esper is right to push an aggressive approach for reforming the system.80
Not long before he passed away, Stephen Hawking warned that “[s]uccess in creating effective AI, could be the biggest event in the history of our civilization. Or the worst.”81 AI is here, and it will proliferate rapidly. Congress and the President have given the DoD some daunting tasks, but also an effective roadmap to get where it needs to be—i.e., understand, control, field, and develop ethical but effective AI and maintain dominance and leadership in the AI realm. Achieving those tasks will require significant changes in the way the DoD does business, both internally and with critical enablers across industry, academia, and the international community. Of course, these proposed reforms could be similarly applied across the spectrum of the DoD’s technological challenges (e.g., cyber), but sweeping change, at least in the relative short run, is far less likely to succeed. Harnessing the collective talent required to increase the speed, flexibility, and precision of responsible AI development and integration needs a focused effort. The DoD has much work to do, but it has an open door to set conditions for continued relevance in the AI world. TAL
1. Scot Schultz, Artificial Intelligence: The Next Industrial Revolution, Inside HPC (Feb. 5, 2018), https://insidehpc.com/2018/02/artificial-intelligence-industrial-revolution/; Nick Ismael, Blurring the Lines: The Evolution of Artificial Intelligence, Information Age (Jan. 22, 2018), https://www.information-age.com/evolution-artificial-intelligence-123470456/.
2. Ali Breland, Lawmakers Introduce Bipartisan AI Legislation, The Hill (Dec. 12, 2017), https://thehill.com/policy/technology/364482-lawmakers-introduce-bipartisan-ai-legislation; see also Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act, H.R. 4625, 115th Cong. § 1 (2017).
3. National Defense Authorization Act for Fiscal Year 2019, Pub. L. No. 115-232, § 238, 132 Stat. 1636 (2018) [hereinafter NDAA § 238].
4. Drew Harwell, Defense Department Pledges Billions Toward Artificial Intelligence Research, Wash. Post (Sept. 7, 2018), https://www.washingtonpost.com/technology/2018/09/07/defense-department-pledges-billions-toward-artificial-intelligence-research/?utm_term=.863205257391; Jade Leung and Sophie-Charlotte Fischer, JAIC: Pentagon Debuts Artificial Intelligence Hub, The Bulletin (Aug. 8, 2018), https://thebulletin.org/2018/08/jaic-pentagon-debuts-artificial-intelligence-hub/; U.S. Dep’t of the Army, Dir. 2018-18, Army Artificial Intelligence Task Force in Support of the Department of Defense, (2 Oct. 2018) [hereinafter Army Dir. 2018-18].
5. NDAA § 238, supra note 3, § 238 (a)(2).
6. Id. § 238 (b-c).
7. Id. § 238 (e)(3)(A).
8. Id. § 238 (e)(3)(B-E).
9. Exec. Order No. 13859, 84 Fed. Reg. 31 (Feb. 14, 2019) [hereinafter Exec. Order].
13. Id.; National Defense Authorization Act for Fiscal Year 2018, Pub. L. No. 115-91, § 512, 131 Stat. 1283 (2017) [hereinafter NDAA § 512].
14. Exec. Order, supra note 9.
15. Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, U.S. Dep’t of Def. (Feb. 12, 2019), https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/c.PDF [hereinafter DoD Strategy].
16. Exec. Order, supra note 9.
18. Kathryn Dura, New Defense Policy a Reminder That US is Not Alone in AI Efforts, C4ISRNET (Aug. 28, 2018), https://www.c4isrnet.com/opinion/2018/08/28/new-defense-policy-a-reminder-that-us-is-not-alone-in-ai-efforts.
19. U.S. Dep’t of Def., Dir. 3000.09, Autonomy in Weapons Systems (21 Nov. 2012) [hereinafter DoD 3000.09].
20. Daisuke Wakabayashi & Scott Shane, Google Will Not Renew Pentagon Contract That Upset Employees, N.Y. Times (June 1, 2018), https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.
21. Patrick Tucker, Pentagon Seeks a List of Ethical Principles for Using AI in War, DefenseOne (Jan. 4, 2019), https://www.defenseone.com/technology/2019/01/pentagon-seeks-list-ethical-principles-using-ai-war/153940.
23. Jonathan Shaw, Artificial Intelligence and Ethics: Ethics and The Dawn of Decision-Making Machines, Harv. Mag. (Jan.-Feb. 2019), http://www.harvardmag.com/pdf/2019/01-pdfs/0119-HarvardMag.pdf.
24. Will Knight, Forget Killer Robots-Bias Is the Real AI Danger, MIT Tech. Rev. (Oct. 3, 2017), https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger (quoting John Giannandrea, Google’s AI Chief).
26. Tucker, supra note 21.
27. Memorandum from Secretary of Defense to Principal Officials of Department of Defense et al., subject: Artificial Intelligence Ethical Principles for the Department of Defense (21 Feb. 2020); see also Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (2019), https://media.defense.gov/2019/Oct/31/2002204458/1/1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF.
28. Major Andrew Bowne, Innovation Acquisition Practices in the Age of AI, Army Law., Iss. 1, 2019, at 75; Daniel S. Hoadley & Nathan J. Lucas, Cong. Research Serv., R45178, Artificial Intelligence and National Security 18-19 (2018) [hereinafter Hoadley & Lucas].
29. Paul Scharre, Army of None 118-119 (2018).
30. Uptin Saiidi, China Could Surpass the US in Artificial Intelligence Tech. Here’s How, CNBC (Dec. 13, 2018), https://www.cnbc.com/2018/12/14/china-could-surpass-the-us-in-artificial-intelligence-tech-heres-how.html; Darrell M. West, Assessing Trump’s Artificial Intelligence Executive Order, Brookings (Feb. 12, 2019), https://www.brookings.edu/blog/techtank/2019/02/12/assessing-trumps-artificial-intelligence-executive-order.
31. Echo Huang, China Has Shot Far Ahead of the US on Deep Learning Patents, Quartz (Mar. 2, 2018), https://qz.com/1217798/china-has-shot-far-ahead-of-the-us-on-ai-patents.
32. Hoadley & Lucas, supra note 27, at 19.
33. Vikram Barhat, China is Determined to Steal A.I. Crown From U.S. and Nothing, Not Even A Trade War, Will Stop It, CNBC (May 4, 2018), https://www.cnbc.com/2018/05/04/china-aims-to-steal-us-a-i-crown-and-not-even-trade-war-will-stop-it.html.
34. Jill Dougherty & Molly Jay, Russia Tries to Get Smart about Artificial Intelligence, Wilson Q. (Spring 2018), https://wilsonquarterly.com/quarterly/living-with-artificial-intelligence/russia-tries-to-get-smart-about-artificial-intelligence.
35. Tom Ramstack, Pentagon Says U.S. Military Losing Its Advantage With Artificial Intelligence, The Gazette (Dec. 16, 2018), https://gazette.com/military/pentagon-says-u-s-military-losing-its-advantage-with-artificial/article_b225e198-ffb4-11e8-9d38-3716e8a47e98.html.
36. Kai-Fu Lee, How Does The Artificial Intelligence Scene In China Compare To The United States?, Forbes (Oct. 4, 2018), https://www.forbes.com/sites/quora/2018/10/04/how-does-the-artificial-intelligence-scene-in-china-compare-to-the-united-states/#7d9d3b1d7a6f.
37. Aileen Kin, DARPA Invests Billions Toward the Future of Artificial Intelligence, Geo. L. Tech. Rev. (October 2018), https://georgetownlawtechreview.org/darpa-invests-billions-toward-the-future-of-artificial-intelligence/GLTR-10-2018.
38. Kelsey D. Atherton, The Pentagon Doesn’t Know How Much It Is Spending on AI, C4ISRNET (Oct. 9, 2018), https://www.c4isrnet.com/c2-comms/2018/10/09/the-pentagon-doesnt-know-how-much-it-is-spending-on-ai.
39. 3 David A. Drabkin et al., Report of the Advisory Panel on Streamlining and Codifying Acquisition Regulations, at 20 (2019), https://discover.dtic.mil/wp-content/uploads/809-Panel-2019/Volume3/Sec809Panel_Vol3-Report_Jan2019_part-1_0509.pdf [hereinafter 3 Report of the Advisory Panel].
40. Id. at EX-1.
41. National Defense Authorization Act for Fiscal Year 2016, Pub. L. No. 114-92, § 809, 129 Stat. 726 (2015) [hereinafter NDAA § 809].
42. 2 David A. Drabkin et al., Report of the Advisory Panel on Streamlining and Codifying Acquisition Regulations, at 102 (2018), https://section809panel.org/wp-content/uploads/2018/06/Sec809Panel_Vol2-Report_June18.pdf [hereinafter 2 Report of the Advisory Panel].
43. Moshe Schwartz & Heidi Peters, Cong. Research Serv., R45068, Acquisition Reform in the 2016-2018 (NDAAS) 1-2 (2018); Bowne, supra note 27.
44. Bowne, supra note 27; NDAA § 809, supra note 40, §§ 804, 806.
45. Authority of the Department of Defense to Carry Out Certain Prototype Projects, 10 U.S.C. § 2371b (2015).
46. Hywel Roberts, Increased Competition Drives Innovation in Manufacturing, HR Mag. (Aug. 4, 2014), http://www.hrmagazine.co.uk/article-details/increased-competition-drives-innovation-in-manufacturing.
47. See, e.g., DFARS 219.2 (Apr. 2018).
48. See supra note 43.
49. See supra note 38, at 12.
50. Bowne, supra note 27; Telephone Interview with Brendan M. McCord, AI Program Manager for Defense Innovation Unit (Oct. 19, 2018); NDAA § 809, supra note 40, § 806.
51. See supra note 44, para. (f); Bowne, supra note 27.
52. See supra note 38, at 13-14.
53. Bowne, supra note 27.
54. See supra note 38, at 14.
55. U.S. Gov’t Accountability Office, GAO-16-80, Defense Acquisition Workforce: Actions Needed to Guide Planning Efforts and Improve Workforce Capability (2015); Stephen Goodrich, The Top 10 Silent Killers of Government Efficiency and Effectiveness, Gov. Exec. (Nov. 7, 2017), https://www.govexec.com/excellence/management-matters/2017/11/top-10-silent-killers-government-efficiency-and-effectiveness/142325/.
56. U.S. Gov’t Accountability Office, GAO-16-209, Federal Acquisitions: Use of ‘Other Transaction’ Agreements Limited and Mostly for Research and Development Activities at 5, 12, 14 (2016) [hereinafter GAO-16-209]; U.S. Gov’t Accountability Office, GAO-08-1088, Department of Homeland Security: Improvements Could Further Enhance Ability to Acquire Innovative Technologies Using Other Transaction Authority at 3 (2008).
57. GAO-16-209, supra note 55, at 5.
58. GAO-16-209, supra note 55, at 24; see supra note 44.
59. GAO-16-209, supra note 55.
60. Competition Requirements, Pub. L. No. 110-417, § 253, 122 Stat. 239, 4546 (2008) (codified in 41 U.S.C. § 253 (2010)).
61. See supra note 38, at 12-13; National Defense Authorization Act for Fiscal Year 2017, Pub. L. No. 114-328, 130 Stat. 2000 (2017).
62. McCord, supra note 60.
63. See supra note 38, at 11.
65. Richard Kuzma et al., Good Will Hunting: The Strategic Threat of Poor Talent Management, War on the Rocks (Dec. 13, 2018), https://warontherocks.com/2018/12/good-will-hunting-the-strategic-threat-of-poor-talent-management.
66. NDAA § 238, supra note 3; DoD Strategy, supra note 15.
67. Cheryl Pellerin, Project Maven to Deploy Computer Algorithms to War Zone by Year’s End, DoD News (July 21, 2017), https://dod.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/; supra note 46.
68. Policy Letter 11-01, Performance of Inherently Governmental and Critical Functions, Off. of Mgmt. & Budget, Off. of Fed. Procurement, 76 Fed. Reg. 176 (2011), https://www.govinfo.gov/content/pkg/FR-2011-09-12/pdf/2011-23165.pdf; 2 Report of the Advisory Panel, supra note 42, at 151.
69. U.S. Const. art. II, § 2, cl. 2.
70. See supra note 67.
72. Wakabayashi & Shane, supra note 20.
73. Telephone Interview with John Beieler, Program Manager for Intelligence Advanced Research Projects Activity (Oct. 15, 2018).
74. Civil Service Reform Act, Pub. L. No. 95-454, §§ 701, 703(a)(2), 92 Stat. 1111, 1217 (1978) (establishing the federal service labor management relations statute) (codified in 5 U.S.C. §§ 7101-7154 (2012)).
75. Classification, 5 U.S.C. §§ 5101-5115 (2012); Pay Rates and Systems, 5 U.S.C. §§ 5301-5392 (2012).
76. See supra note 41, at 80-81.
77. See supra note 74; Rani Molla, Facebook, Google, and Netflix Pay A Higher Median Salary Than Exxon, Goldman Sachs, or Verizon, Recode (Apr. 30, 2018), https://www.recode.net/2018/4/30/17301264/how-much-twitter-google-amazon-highest-paying-salary-tech.
78. Labor Management and Employee Relations, 5 U.S.C. § 7103(b); Exec. Order No. 13819, 82 Fed. Reg. 61431 (Dec. 27, 2017); Exec. Order No. 13760, 82 Fed. Reg. 5325 (Jan. 12, 2017).
79. Jared Serbu, Army Boss: Civilian Hiring Process Broken, Should Be Moved From OPM, Federal News Network (Nov. 14, 2018), https://federalnewsnetwork.com/dod-reporters-notebook-jared-serbu/2018/11/army-boss-civilian-hiring-process-broken-should-be-moved-from-opm.
81. Arjun Kharpal, Stephen Hawking Says AI Could Be The ‘Worst Event In The History Of Our Civilization’, CNBC (Nov. 6, 2017), https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.