a topic other researchers don't see as important,
Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective, more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used static analysis of the resulting code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis mine):