In Dec 2023 Microsoft published a report in their Future of Work series, this time focused on artificial intelligence — no surprise there. Direct PDF access here.
The document is an excellent summary of a large body of research on the topic, and provides a good bird's-eye view of the current state of the field, with some emphasis on generative AI and LLMs.
I’ve read the report myself and tried to extract the parts that made sense to me and could inform my own practice in the future. Given the density of the paper, this is clearly just one of many possible summaries. It’s also important to note, imho, that some of the cited papers would need to be read in detail for context, that Microsoft has a vested interest in highlighting benefits, and that some of the referenced papers can be quite dated and might not apply to the current state of the tech.
Here’s the summary, divided into some relevant categories:
- General Info
- Studies show +37% faster work and +40% higher quality, but also +19% incorrect solutions (roughly 1 in 5).
- “Skills not directly [related] to content production, such as leading, dealing with critical social situations, navigating interpersonal trust issues, and demonstrating emotional intelligence, may all be more valued in the workplace”.
- Writing effective prompts requires multiple iterations.
- Generative AI demands greater metacognition (the ability to analyze, understand, and control one’s own thought processes). It can also support users’ metacognition.
- Problem Solving
- Using LLMs to challenge assumptions, rather than to supply answers, can be a very beneficial approach.
- With LLMs taking on the writing task, humans can spend more time analyzing and integrating (editing).
- People often over-rely on AI. Over-reliance leads to poorer performance than either the human or the AI acting alone. AI literacy and careful design are needed.
- Developers
- Copilot for writing code seems to have a lot of benefits (but note the report might be biased here, as Copilot is a Microsoft product).
- Sometimes, however, generated code can introduce subtle bugs.
- People Management
- Using LLMs to break down tasks into steps can be beneficial, especially when delegating (I’d add this could also benefit some people with neurodiversities, like ADHD).
- AI solutions could support managers in workflow planning, freeing up time.
- When AI advice fails, the blame can fall on the person who followed it. Organizations need to prepare for these scenarios.
- Design
- Designing LLMs to annotate the parts of their output that have low confidence leads to improved accuracy (a minimal sketch of this idea follows the list).
- It’s fundamental to preserve people’s agency over the creative process.
- In education LLMs can be used as coaches.
- LLMs can rapidly analyze non-quantitative data for social research use.
- Knowledge Management
- Knowledge stored in chats and documents can be surfaced by LLMs and made accessible in a conversational way.
- LLMs can be trained on fragmented organizational knowledge and make it accessible (a toy retrieval sketch also follows the list).
- The introduction of AI into any organization is an inherently sociotechnical process.
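To make the “annotate low confidence” point from the Design section more concrete, here is a purely illustrative sketch of my own, not something from the report. It assumes a model that exposes per-token log-probabilities; the function name, threshold, and sample data are all made up. The idea is simply to wrap uncertain tokens in markers that a UI could render as highlights, nudging the reader to double-check those parts.

```python
import math

# Hypothetical helper: given (token, logprob) pairs from a model that exposes
# token log-probabilities, wrap low-confidence tokens in [? ... ?] markers so
# a reader knows which parts of the answer deserve extra scrutiny.
def annotate_low_confidence(token_logprobs, threshold=0.6):
    """token_logprobs: list of (token, logprob) tuples; threshold: minimum probability."""
    annotated = []
    for token, logprob in token_logprobs:
        prob = math.exp(logprob)
        if prob < threshold:
            annotated.append(f"[?{token}?]")  # flag uncertain token
        else:
            annotated.append(token)
    return "".join(annotated)

# Toy example with invented log-probabilities.
sample = [("The ", -0.02), ("deadline ", -0.05), ("is ", -0.03), ("March 3", -1.9), (".", -0.01)]
print(annotate_low_confidence(sample))
# -> "The deadline is [?March 3?]."
```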
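The Knowledge Management bullets describe grounding an LLM in scattered organizational knowledge so people can query it conversationally. Below is a minimal, again purely illustrative, retrieval sketch: the fragments, scoring, and prompt template are all invented, and a real system would use embeddings and a vector store rather than simple word overlap. The point is only to show the shape of the approach: retrieve the most relevant fragments, then hand them to the model as context.

```python
from collections import Counter

# Hypothetical fragments of scattered organizational knowledge (invented).
FRAGMENTS = [
    "Expense reports are submitted through the finance portal by the 5th of each month.",
    "The on-call rotation is documented in the team wiki under Operations.",
    "New laptops are requested via an IT ticket with manager approval.",
]

def score(question: str, fragment: str) -> int:
    # Crude relevance score: count of words shared between question and fragment.
    q = Counter(question.lower().split())
    f = Counter(fragment.lower().split())
    return sum((q & f).values())

def build_prompt(question: str, top_k: int = 2) -> str:
    # Pick the best-matching fragments and pack them into a prompt for an LLM.
    best = sorted(FRAGMENTS, key=lambda frag: score(question, frag), reverse=True)[:top_k]
    context = "\n".join(f"- {frag}" for frag in best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I submit an expense report?"))
```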
I appreciate how much the paper gets into behavioural details and how these can affect and change how we work. I still feel some of these areas are too young to identify a clear trend, but the change is clearly there. It’s also interesting to see the emphasis on review and collaboration, both for direct users and for the people who build these tools: the clear message is not to over-rely on AI, but a sprinkle of it can be very beneficial.