TechCrunch recently sat down with Shelley McKinley, GitHub’s chief legal officer, to discuss GitHub’s Copilot and the newly approved EU AI Act.
Three years in the making, the EU AI Act recently made waves as it secured approval from the European Parliament, marking a significant milestone as “the world’s first comprehensive AI law.” Designed to address the expanding influence of AI in our daily lives, this new legal framework aims to regulate AI applications based on their perceived risks, tailoring rules and provisions according to the specific application and context.
GitHub, now under Microsoft’s umbrella since its acquisition for $7.5 billion in 2018, has emerged as a prominent voice in expressing concerns, particularly regarding the potential legal liabilities for open-source software developers stemming from ambiguous language within the regulations.
McKinley’s journey to GitHub traces back to her tenure at Microsoft, where she held legal roles across hardware divisions including Xbox and HoloLens. Since becoming GitHub’s chief legal officer nearly three years ago, she has navigated a multidisciplinary brief: beyond the typical legal matters of commercial contracts, product, and HR issues, her responsibilities extend to driving GitHub’s accessibility mission, ensuring all developers can use its tools and services.
Furthermore, McKinley oversees environmental sustainability initiatives, aligning closely with Microsoft’s broader sustainability objectives. In the realm of trust and safety, McKinley plays a role in moderating content to cultivate an inclusive and secure environment for developers on GitHub.
However, amidst these multifaceted responsibilities, McKinley’s role has become increasingly entwined with the evolving terrain of AI. With the recent approval of the EU AI Act, McKinley’s expertise takes centre stage as GitHub navigates the implications of this landmark legislation.
During her conversation with TechCrunch in London, McKinley discussed the intricate intersections between AI, open-source dynamics, and GitHub’s legal landscape.
GitHub’s Battle for Developer Rights
GitHub enables collaborative software development, offering a platform where users can host, manage, and share code repositories with a global audience. While companies have the option to privatise repositories for internal projects, GitHub’s success has largely been driven by open-source collaboration, where developers worldwide contribute to projects in a public setting.
Since its acquisition by Microsoft six years ago, GitHub has witnessed a transformative shift in the technological landscape. The rise of AI, once a budding phenomenon, has now permeated mainstream consciousness with breakthroughs like ChatGPT and DALL-E.
“I would say that AI is taking up [a lot of] my time — that includes things like ‘how do we develop and ship AI products?’ and ‘how do we engage in the AI discussions that are going on from a policy perspective?’ as well as ‘how do we think about AI as it comes onto our platform?’” McKinley told TechCrunch.
The progression of AI has relied significantly on open-source contributions, where collaboration and the exchange of data have fueled the development of groundbreaking AI systems. A prime example of this relationship is OpenAI, once celebrated for its commitment to open-source principles. However, the organisation has since shifted toward a more proprietary approach.
When it comes to Europe’s AI regulations, concerns loom over the unintended consequences for the open-source community. Critics argue that stringent regulations could stifle innovation and deter developers from contributing to open-source AI projects.
“Regulators, policymakers, lawyers … are not technologists,” McKinley told TechCrunch. “And one of the most important things that I’ve personally been involved with over the past year is going out and helping to educate people on how the products work. People just need a better understanding of what’s going on so that they can think about these issues and come to the right conclusions in terms of how to implement regulation.”
Central to GitHub’s lobbying efforts is the preservation of developer incentives within the open-source ecosystem. With over 100 million developers worldwide, GitHub serves as a cornerstone of the open-source movement, fueling what many see as the fourth industrial revolution. McKinley underscores GitHub’s commitment to fostering developer collaboration and accelerating human progress, framing it as mission-critical rather than merely ‘fun to have’ or ‘nice to have.’
As the dust settles on the EU AI Act, GitHub’s advocacy efforts have yielded tangible results, with exemptions granted for AI models released under free and open-source licences. This victory marks a step forward in safeguarding developer rights and preserving open-source innovation, and McKinley credits GitHub’s sustained lobbying for reshaping the regulatory landscape.
“That is a direct result of the work that we’ve been doing to help educate policymakers on these topics,” she told TechCrunch. “What we’ve been able to help people understand is the componentry aspect of it — there’s open source components being developed all the time, that are being put out for free and that [already] have a lot of transparency around them — as do the open source AI models. But how do we think about responsibly allocating the liability? That’s really not on the upstream developers; it’s just really downstream commercial products. So I think that’s a really big win for innovation, and a big win for open source developers.”
Copilot: Navigating the AI Frontier in Software Development
Since its debut three years ago, Copilot has captivated developers worldwide, reshaping coding practices with intuitive suggestions akin to Gmail’s Smart Compose. Developed in collaboration with OpenAI, Copilot was built on OpenAI’s Codex model, trained on vast amounts of public source code and natural-language text.
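The comment-to-code interaction described above is easiest to see with a toy example. This is a hypothetical illustration of the general pattern, not actual Copilot output; the function name and logic are invented for the sketch:

```python
import re

# A developer types a comment and a signature; an assistant like Copilot
# then proposes a function body. The body below stands in for a typical
# accepted suggestion (hypothetical, not real Copilot output).

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a plausible email address."""
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.match(pattern, address) is not None
```

The point of contention in the article is not examples like this one, but suggestions that closely resemble specific existing code.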
Yet, amid the excitement surrounding Copilot’s capabilities lies a debate within the developer community. The launch of Copilot’s commercial version in 2022 sparked criticism from parts of that community; the Software Freedom Conservancy went as far as urging open-source developers to leave GitHub altogether.
The fundamental question posed by Copilot revolves around authorship and attribution. As it generates code snippets reminiscent of existing code, the issue of crediting original developers becomes paramount.
Contrary to popular belief, open-source software is not a free-for-all. While open-source licences vary in their restrictions, proper attribution remains a cornerstone principle. Attribution becomes difficult, however, when Copilot draws from a vast pool of sources, blurring the lines of authorship.
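The attribution obligation is concrete: a permissive licence such as MIT requires that the copyright and permission notice travel with any copy of the code. A minimal sketch of what compliant reuse looks like (the project and author names are hypothetical):

```python
# Vendored from the hypothetical "tinyutils" project.
# MIT License -- Copyright (c) 2021 Example Author
# Permission is hereby granted, free of charge, ... provided that the
# above copyright notice and this permission notice are included in all
# copies. (This retained header is the attribution requirement.)

def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(high, value))
```

When a model emits a snippet assembled from many such sources, there is no single header to carry along, which is the heart of the attribution problem.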
McKinley emphasises that Copilot’s function as generative AI distinguishes it from a mere copy-and-paste tool. While similarities may arise with publicly available code, Copilot’s output reflects common coding practices rather than direct replication. GitHub acknowledges concerns raised by developers and strives to address them responsibly, she told TechCrunch.
Legal challenges have accompanied Copilot’s rise, including allegations of copyright infringement from U.S. software developers. GitHub has responded with features such as duplication detection and code referencing, aimed at fostering transparency and helping developers navigate licensing requirements.
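GitHub has not published the internals of its duplication filter, but the idea of flagging suggestions that reproduce known public code can be sketched in miniature. This is a toy illustration only, not GitHub’s implementation; the function names and the length threshold are invented:

```python
def normalise(code: str) -> str:
    """Collapse whitespace so formatting differences don't mask matches."""
    return " ".join(code.split())

def matches_public_code(suggestion: str, public_corpus: list[str],
                        min_chars: int = 40) -> bool:
    """Flag a suggestion that reproduces a long verbatim run of known code.

    Short snippets are ignored, since brief idioms recur everywhere and
    are not evidence of copying.
    """
    snippet = normalise(suggestion)
    if len(snippet) < min_chars:
        return False
    return any(snippet in normalise(doc) for doc in public_corpus)
```

A production filter would work on tokenised model output at scale, but the design trade-off is the same: too low a threshold flags common idioms, too high a threshold misses real duplication.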
As GitHub continues to refine Copilot in response to community feedback, McKinley acknowledges the diverse perspectives shaping the platform’s evolution.
“There are a lot of opinions out there — there are more than 100 million developers on our platform,” McKinley told TechCrunch. “And there are a lot of opinions between all of the developers, in terms of what they’re concerned about. So we are trying to react to feedback from the community, proactively take measures that we think help make Copilot a great product and experience for developers.”
What Lies Ahead?
With the EU AI Act approved, a new era of AI regulation is taking shape. But the journey towards implementation is just beginning: companies will generally have a two-year period before they must comply with the new law.
As McKinley puts it, the formulation of technical standards will be pivotal in navigating this regulatory landscape. Much like the harmonised privacy standards under GDPR, establishing unified standards for AI compliance will be imperative. Ensuring that developers and open-source communities have a seat at the table in these discussions will be of key importance, she said.
Beyond the EU’s jurisdiction, other regulations are also on the horizon. President Biden’s recent executive order underscores the United States’ proactive stance on AI safety and security. While the EU emphasises fundamental rights, the U.S. focuses on cybersecurity and combating deepfakes; both, however, converge on a risk-based approach, recognising the importance of mitigating potential threats, she said.
“I think taking a risk-based approach is something that we are in favour of — it’s the right way to think about it,” McKinley added.