chapter-1/106.md
## 1.6 The skills we need
## 1.6 我们所需的技能
If Copilot can write our code, explain it, and fix bugs in it, are we just done? Do we just tell Copilot what to do and celebrate our pure awesomeness?
No. It's true that some of the skills that programmers rely upon (writing correct syntax, for example) will decrease in importance. But other skills remain critical. For example, you cannot throw a huge task at Copilot like,
"Make a video game. Oh, and make it fun." Copilot will fail. Instead, we need to break down such a large problem into smaller tasks that Copilot can help us with. And how do we break a problem down like that? Not easily, it turns out. This is a key skill that humans need to hone in their conversations with tools like Copilot, and a skill that we will teach throughout the book.
Other skills, believe it or not, may take on even more importance with Copilot than without. Testing code has always been a critical task in writing code that works. We know a lot about testing code written by humans, because we know where to look for typical problems. We know that humans often make programming errors at the boundaries of values. For example, if we wrote a program to multiply two numbers, it's likely that we'd get it right for most values but maybe not when one value is 0. What about code written by AI, where twenty lines of flawless code could hide one line so absurd that we'd never expect it there? We don't have experience with that. We need to test even more carefully than before.
信不信由你,有些技能在使用 Copilot 的时候可能比不用时更为重要。测试代码始终是编写可靠代码的关键任务。我们对于测试人类编写的代码有很多了解,因为我们知道应该在哪里寻找常见的问题。我们知道,人们在处理值的边界条件时经常会出错。例如,如果我们编写一个程序来乘两个数,大部分时候我们可能会做得很好,但对于其中一个值是 0 的情况可能就不行。那么,对于 AI 编写的代码,如果在二十行完美的代码中隐藏着一行我们完全不会预料到的荒唐代码呢?我们对此尚无经验。因此,我们需要比之前更加细致地进行测试。
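As a small illustration of boundary testing (our own sketch; the `multiply` function is a hypothetical example, not from the book), a function can pass all of its "typical" tests while a bug hides at a boundary value, so the boundaries deserve their own assertions:

```python
def multiply(a, b):
    """Multiply two numbers (hypothetical example for illustration)."""
    return a * b

# Typical values are easy to get right:
assert multiply(3, 4) == 12
assert multiply(-2, 5) == -10

# Boundary values -- 0, negatives, sign flips -- are where human-written
# (and AI-written) code tends to go wrong, so test them explicitly:
assert multiply(0, 7) == 0
assert multiply(7, 0) == 0
assert multiply(-3, -4) == 12
```

The same habit applies to AI-generated code: even when the bulk of it looks obviously correct, run it against the edges of its input range before trusting it.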
Finally, some required skills are entirely new. The main one here is called _prompt engineering_, which involves how to tell Copilot what to do. When we're asking Copilot to write some code, we're using a _prompt_ to make the request. It's true that we can use English to write that prompt and ask for what we want, but that alone isn't enough. We need to be very precise if we want Copilot to have any chance of doing the right thing. And even when we are precise, Copilot may still do the wrong thing. In that case, we need to first identify that Copilot has indeed made a mistake, and then tweak our description to hopefully nudge it in the right direction. In our experience, seemingly minor changes to the prompt can have outsized effects on what Copilot produces.
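As a sketch of what that precision can look like (the prompts and the `clean_strings` function below are our own hypothetical examples, not from the book), compare a vague request with one that spells out input, output, and edge cases — in Copilot's case, often written as a comment or docstring:

```python
# Vague prompt -- Copilot must guess what "clean up" means:
#     "Clean up this list of strings."
#
# Precise prompt -- states input, output, and edge cases, e.g. as a docstring:
def clean_strings(strings):
    """Given a list of strings, return a new list in which each string has
    leading and trailing whitespace removed, and strings that become empty
    are dropped. The input list is not modified."""
    return [s.strip() for s in strings if s.strip()]

print(clean_strings(["  hello ", "", "   ", "world"]))  # ['hello', 'world']
```

Even a prompt this precise may need tweaking after you see what the tool produces; the point is that the precise version leaves far less room for the wrong guess.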
chapter-1/107.md
## 1.7 Societal concerns about AI code assistants like Copilot
## 1.7 社会对像 Copilot 这样的 AI 编程助手的担忧
There's a lot of societal uncertainty right now about AI code assistants like Copilot. We thought we'd end the chapter with a few questions and our current answers. Perhaps you've been wondering about some of these questions yourself! Our answers may turn out to be hilariously incorrect, but they do capture our current thoughts as two professors and researchers who have dedicated their careers to teaching programming.
如今,社会对于像 Copilot 这样的 AI 代码助手充满了不确定性。我们想以几个问题及我们目前的答案结束这一章,你可能也在思考其中的一些问题!虽然我们的回答将来可能会显得有些荒谬,但它们确实反映了我们作为两位专注于编程教学的教授和研究者当前的想法。
Q: Are there going to be fewer tech and programming jobs now that we have Copilot?
Q: 现在有了 Copilot,技术和编程岗位会减少吗?
A: Probably not. What we do expect to change is the nature of these jobs. For example, we see Copilot as being able to help with many tasks typically associated with entry-level programming jobs. This doesn't mean that entry-level programming jobs go away, only that they change as programmers are able to get more done given increasingly sophisticated tools.
Q: Will Copilot stifle human creativity? Will it just keep swirling around and recycling the same code that humans have already written, limiting introduction of new ideas?
A: We suspect not. Copilot helps us work at a higher level, further removed from the underlying machine code, assembly code, or Python code. Computer scientists use the term _abstraction_ to refer to the extent that we can disconnect ourselves from the low-level details of computers. Abstraction has been happening since the dawn of computer science, and we don't seem to have suffered for it. On the contrary, it enables us to ignore problems that have already been solved and focus on solving broader and broader problems. Indeed, it's been the advent of better programming languages that has facilitated better software – the software that powers Google search, Amazon shopping carts, and Mac OS wasn't written (and likely could not have been written) when we only had assembly!
A: 我们认为不会。Copilot 使我们能够在更高层面上进行工作,远离了底层机器码、汇编语言或 Python 代码。计算机科学家用“抽象”这一术语来描述我们与计算机底层细节脱离的程度。抽象自计算机科学诞生之初就在进行,我们并没有因此遭受损失。相反,它让我们能够忽略那些已经解决的问题,专注于解决越来越广泛的问题。事实上,正是更好的编程语言的出现,推动了更高质量软件的开发——那些驱动 Google 搜索、亚马逊购物车和 Mac OS 的软件,并非在我们仅有汇编语言时编写的(可能也确实无法编写)!
Q: I keep hearing about ChatGPT. What is it? Is it the same as Copilot?
Q: 我一直在听人说 ChatGPT,这是什么?它和 Copilot 相同吗?
A: It's not the same as Copilot, but it's built on the same technology. Rather than focus on code, though, ChatGPT focuses on knowledge in general. As a result, it has insinuated itself into a wider variety of tasks than Copilot. For example, it can answer questions, write essays, and even do well on a Wharton MBA exam \[7\]. Education will need to change as a result: we cannot have people ChatGPT'ing their way to MBAs! The ways in which we spend our time may change. Will humans keep writing books, and in what ways? Will people want to read books knowing that they were partially or fully written by AI? There will be impacts across industries, including finance, health care, and publishing \[8\]. At the same time, there is unfettered hype right now, so it can be difficult to separate truth from fiction. This problem is compounded by the simple truth that no one knows what's going to happen here long-term. In fact, there's an old adage coined by Roy Amara (known as Amara's Law) that says, "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." As such, we need to do our best to stay tuned into the discussion so that we can adapt accordingly.
A: ChatGPT 和 Copilot 不相同,但它们是基于同一种技术构建的。不过,与专注于编码的 Copilot 不同,ChatGPT 更多地关注于广泛的知识领域。因此,它已经融入了比 Copilot 更多样化的任务中。例如,它可以回答问题、撰写论文,甚至能够在沃顿商学院的 MBA 考试中取得好成绩 \[7\]。这意味着教育需要随之而变:我们不能让人们只靠使用 ChatGPT 就获得 MBA!我们花费时间的方式可能需要变化。人们会继续写作书籍吗?以何种方式?当人们知道书籍可能部分或完全由 AI 编写时,他们还会愿意阅读吗?这将对金融、医疗保健、出版等行业产生影响 \[8\]。同时,目前存在大量的过度炒作,因此很难区分事实与虚构。这个问题因为长期发展未知而更加复杂。实际上,有一句由 Roy Amara 提出的老话(阿马拉法则)指出:“我们倾向于高估技术在短期内的影响,而低估其长期影响。”因此,我们需要尽最大努力保持对讨论的关注,以便我们能够适当地适应。
In the next chapter, we’ll get you started with using Copilot on your computer so you can get up and running writing software.