I have mixed feelings about whether programmers will be completely replaced by AI. It is certainly on the cards that, given a large enough model, a lot of programming jobs could be automated. As you say in the post, humans might just be reviewing the code at that point, so knowledge of coding would still be valuable, but more for reading than for writing. However, I also wonder if this leads to a point where humans are no longer experienced enough to catch bugs in generated code. A senior programmer is good at their job because they have spent years writing code and have seen and fixed a ton of issues in production, so they know what to look for when reviewing code. That would become a lost art.
For really massive, complex, and critical projects, such as the Linux kernel or a database engine, I think we will still need skilled programmers. Such projects are too large and intricate for an AI model to meaningfully take over.
The other question is innovation. Sometimes, when faced with an engineering problem, humans come up with novel ways to solve it in order to meet the constraints of the domain: inventing a new data structure, a new algorithm, or applying an existing technique in a novel way. I'm not sure these models can produce such ideas; they tend to stick to what they have seen in their training data.
Re: the representation of different types of employees in layoffs, I think it's important to normalize by the percent of the workforce those employees make up. So instead of the percent of layoffs made up by software engineers, the chart should show the percent of software engineers laid off. Tech companies are SWE-heavy, so even a layoff applied uniformly across employee types would still consist mostly of SWEs.
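To make the normalization point concrete, here is a small sketch with made-up numbers (the company size, SWE share, and layoff count are all hypothetical, just for illustration):

```python
# Hypothetical company: 10,000 employees, 60% of whom are SWEs,
# laying off 1,000 people uniformly across all roles.
total_employees = 10_000
swe_share = 0.60
total_layoffs = 1_000

swes = total_employees * swe_share       # 6,000 SWEs
swe_layoffs = total_layoffs * swe_share  # uniform layoff hits 600 SWEs

# Metric 1: share of layoffs that are SWEs (what the chart shows)
pct_of_layoffs_swe = swe_layoffs / total_layoffs

# Metric 2: share of SWEs who were laid off (the normalized view)
pct_of_swes_laid_off = swe_layoffs / swes

print(f"SWEs as share of layoffs: {pct_of_layoffs_swe:.0%}")
print(f"SWEs actually laid off:   {pct_of_swes_laid_off:.0%}")
```

Under a perfectly uniform layoff, the first metric reads 60% simply because 60% of the workforce is SWEs, while the normalized metric shows only 10% of SWEs were cut, the same rate as every other role. Only the second number tells you whether SWEs were disproportionately targeted.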