---
title: "[译] [104] Copilot 还能为我们做什么?"
---
## 1.4 What else can Copilot do for us?
## 1.4 Copilot 还能为我们做什么?
As we’ve seen, we can use Copilot to write Python code for us starting from an English description of what we want. Programmers use the word _syntax_ to refer to the symbols and words that are valid in a given language. So, we can say that Copilot takes a description in English syntax and gives us back code in Python syntax. That's a big win, because learning programming syntax has historically been a major stumbling block for new programmers. What kind of bracket am I supposed to use here: \[, (, or {? Do I need indentation here or not? What's the order that we're supposed to write these things: x and then y, or y and then x?
如我们所观察到的,借助 Copilot,我们能够从一段英语描述开始编写 Python 代码。程序员用 “语法” 一词来描述在特定语言中有效的符号与词汇。因此,我们可以说 Copilot 能够将英语语法的描述转换成 Python 语法的代码。这是一大进步,因为历史上,学习编程语法常常是新手程序员的一大障碍。我在这里应该使用哪种括号——\[、(、还是 { 呢?这里需要缩进吗?我们书写这些元素的顺序应该是怎样的:是先 x 后 y,还是先 y 后 x?
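The syntax questions above do have concrete answers in Python. As a minimal sketch (the variable names here are our own, purely for illustration), the three bracket kinds build three different data structures, and indentation itself is part of the syntax:

```python
# The three bracket kinds the questions above refer to, in Python:
point = (3, 4)            # parentheses build a tuple (a fixed-size record)
scores = [90, 85, 77]     # square brackets build a list (ordered, mutable)
ages = {"Ann": 30}        # curly braces build a dictionary (key -> value)

# Indentation is also syntax in Python: the body of an if-statement
# must be indented, or the interpreter rejects the program.
if scores[0] > 80:
    print("first score is above 80")
```

Keeping rules like these straight is exactly the kind of tedium Copilot takes off a beginner's plate.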
Such questions abound and let's be honest: it's uninteresting stuff. Who cares about this when all we want to do is write a program to make something happen? Copilot can help free us from the tedium of syntax. We see this as an important step to help more people successfully write programs, and we look forward to the day when this artificial barrier is completely removed. For now, we still need Python syntax, but at least Copilot helps us with it.
* **Explaining code**. When Copilot generates Python code for us, we’ll need to determine whether that code does what we want. Again, as we said above, Copilot is going to make mistakes. We’re not interested in teaching you every nuance of how Python works (that’s the old model of programming). We _are_ going to teach you how to read Python code to gain an overall understanding of what it does. But we’re also going to use the feature of Copilot that explains code to you in English. When you finish with this book and our explanations, you’ll still have Copilot available to help you understand that next bit of gnarly code that it gives you.
* **Making code easier to understand**. There are many different ways to write code to accomplish the same task. Some of them may be easier to understand than others. Copilot has a tool that can reorganize your code to make it easier for you to work with. For example, code that’s easier to read is often easier to enhance or fix when needed.
* **Fixing bugs**. A _bug_ is a mistake made when writing a program that can result in the program doing the wrong thing. Sometimes, you’ll have some Python code, and it almost works, or works almost always but not in one specific circumstance. If you’ve listened to programmers talk, you may have heard the common story where a programmer would spend hours only to finally remove one = symbol that was making their program fail. Not a fun few hours! In these cases, you can try the Copilot feature that helps to automatically find and fix the bug in the program.
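The "works almost always but not in one specific circumstance" situation from the last bullet can be illustrated with a small, hypothetical example: a function that behaves correctly on most inputs but crashes on one edge case, next to the kind of one-line fix a bug-fixing tool might suggest.

```python
def average(numbers):
    """Buggy version: crashes with ZeroDivisionError on an empty list."""
    return sum(numbers) / len(numbers)


def average_fixed(numbers):
    """Fixed version: handles the empty-list edge case explicitly."""
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)


print(average_fixed([80, 90, 100]))  # 90.0
print(average_fixed([]))             # 0.0 instead of a crash
```

Bugs like this are easy to miss precisely because the function passes every "normal" test; the failing input may never come up until the program is in real use.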
---
title: "[译] [105] Risks and challenges of using Copilot"
---
## 1.5 Risks and challenges of using Copilot
## 1.5 使用 Copilot 的风险和挑战
Now that we're all pumped up about getting Copilot to write code for us, we need to talk about the dangers inherent in using AI Assistants. See references\[2\] and \[3\] for elaboration on some of these points.
既然我们都对让 Copilot 帮我们编码感到非常兴奋,接下来我们必须讨论一下使用 AI 助手所固有的危险。有关这些观点的更多详细信息,请参见参考资料 \[2\]和\[3\]。
**Copyright**. As we discussed above, Copilot is trained on human-written code. More specifically, it was trained using millions of GitHub repositories containing open-source code. One worry is that Copilot will “steal” that code and give it to us. In our experience, Copilot doesn't often suggest a large chunk of someone else’s code, but that possibility is there. Even if the code that Copilot gives us is a melding and transformation of various bits of other people's code, there may still be licensing issues. For example, who owns the code produced by Copilot? There is currently no consensus on the answer.
The Copilot team is adding features to help; for example, Copilot will be able to tell you whether the code that it produced is similar to already-existing code and what the license is on that code \[4\]. Learning and experimenting on your own is great, and we encourage that—but take the necessary care if you do intend to use this code for purposes beyond your home. We’re a bit vague here, and that’s intentional: it may take some time for laws to catch up to this new technology. It’s best to play it safe while these debates are had within society.
**Education**. As instructors of introductory programming courses ourselves, we have seen first-hand how well Copilot does on the types of assignments we have historically given our students. In one study \[5\], Copilot was asked to solve 166 common introductory programming tasks. And how well did it do? On its first attempt, it solved almost 50% of these problems. Give Copilot a little more information, and that number goes up to 80%. You have already seen for yourself how Copilot solves a standard introductory programming problem. Education needs to change in light of tools like Copilot, and instructors are currently discussing how these changes may look. Will students be allowed to use Copilot, and in what ways? How can Copilot help students learn? And what will programming assignments look like now?
**Code quality**. We need to be careful not to trust Copilot, especially with sensitive code or code that needs to be secure. Code written for medical devices, for example, or code that handles sensitive user data must always be thoroughly understood. It's tempting to ask Copilot for code, marvel at the code that it produces, and accept that code without scrutiny. But that code might be plain wrong. In this book, we will be working on code that will not be deployed at large, so while we will focus on getting correct code, we will not worry about the implications of using this code for broader purposes. In this book, we start building the foundations that you will need to independently determine whether code is correct.
**Code security.** As with code quality, code security is absolutely not assured when we get code from Copilot. For example, if we were working with user data, getting code from Copilot is not enough. We would need to perform security audits and have expertise to determine that the code is secure. Again, though, we will not be using code from Copilot in real-world scenarios. Therefore, we will not be focusing on security concerns.
因此,我们不会将重点放在安全问题上。
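As a concrete, hypothetical illustration of why generated code deserves a security review, compare a query built by string formatting (a pattern an assistant could plausibly suggest, since it appears throughout its training data) with a parameterized query; the table and function names here are invented for the sketch.

```python
import sqlite3


def find_user_unsafe(conn, username):
    # Builds SQL by string formatting: a crafted username can inject SQL.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn, username):
    # Parameterized query: the driver treats username as data, not as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ann'), (2, 'bob')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: injection!
print(find_user_safe(conn, payload))    # returns [] as it should
```

Both versions "work" on well-behaved input, which is exactly why accepting generated code without scrutiny is risky: the flaw only shows up under hostile input.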
**Not an expert**. One of the markers of being an expert is awareness of what one knows and, equally importantly, what one doesn't. Experts are also often able to state how confident they are in their response; and, if they are not confident enough, they will learn further until they know that they know. Copilot, and LLMs more generally, do not do this. You ask them a question, and they answer, plain as that. They will confabulate if necessary, mixing bits of truth with bits of garbage into a plausible-sounding but overall nonsensical response. For example, we have seen LLMs fabricate obituaries for people who are alive; the result makes no sense, yet the “obituaries” do contain elements of truth about those people’s lives. When asked why an abacus can perform math faster than a computer, we have seen LLMs invent justifications, something about abacuses being mechanical and therefore necessarily the fastest. There is ongoing work in this area for LLMs to be able to say, “sorry, no, I don't know this,” but we are not there yet. They don't know what they don't know, and that means they need supervision.
**Bias**. LLMs will reproduce the biases present in the data on which they were trained. If you ask Copilot to generate a list of names, it will generate primarily English names. If you ask for a graph, it may produce a graph that doesn’t consider perceptual differences among humans. And if you ask for code, it may produce code in a style reminiscent of how dominant groups write code. (After all, the dominant groups wrote most of the code in the world, and Copilot is trained on that code.) Computer science and software engineering have long suffered from a lack of diversity. We cannot afford to stifle diversity further; indeed, we need to reverse the trend. We need to let more people in and allow them to express themselves in their own ways. How this will be handled with tools like Copilot is still being worked out and is of crucial importance for the future of programming. However, we believe Copilot has the potential to improve diversity by lowering barriers to entry into the field.