LinkedIn Algorithm Updates: What's Changing?

LinkedIn Algorithm and Gender Bias Experiment
In November, a product strategist, identified here by the pseudonym Michelle, conducted an experiment on LinkedIn: she changed her profile's gender to male and her name to Michael, she told TechCrunch.
This action was part of a project known as #WearthePants, designed to investigate a potential bias within LinkedIn’s new algorithm affecting women.
User Complaints and Algorithm Changes
Over several months, numerous active LinkedIn users reported a noticeable drop in both engagement and impressions on the platform. The decline followed an August announcement by Tim Jurka, LinkedIn's Vice President of Engineering, that the platform would use Large Language Models (LLMs) to improve how content is surfaced to users.
Michelle, whose identity is known to TechCrunch, found the changes concerning. She has more than 10,000 followers and also ghostwrites posts for her husband, who has roughly 2,000; despite the fivefold gap in audience size, their posts drew similar impression counts.
“Gender appeared to be the only significant differentiating factor,” she stated.
Results of the Experiment
Marilynn Joyner, a founder, also participated by modifying her profile gender. After consistently posting on LinkedIn for two years, she had observed a decline in post visibility. “Switching my profile gender from female to male resulted in a 238% increase in impressions within just one day,” she shared with TechCrunch.
Similar outcomes were documented by Megan Cornish, Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and others.
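The impression jumps quoted above are percentage changes measured against each poster's own baseline. A one-line helper makes the scale concrete; the baseline figures below are hypothetical, since the article reports only the percentages, not the underlying counts:

```python
def pct_change(before: float, after: float) -> float:
    """Percentage change relative to a baseline value."""
    return 100 * (after - before) / before

# Hypothetical baseline for illustration: a post that went from
# 100 impressions to 338 shows the 238% jump Joyner described.
print(pct_change(100, 338))  # 238.0
```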
LinkedIn’s Response
LinkedIn maintains that its algorithm and AI systems do not utilize demographic data – such as age, race, or gender – as a factor in determining content, profile, or post visibility within the Feed.
The company further asserts that comparing individual feed updates, which may not be perfectly representative or have equal reach, does not automatically indicate unfair treatment or bias.
Expert Analysis
Experts specializing in social algorithms concur that overt sexism is unlikely to be the root cause. However, the possibility of implicit bias influencing the algorithm remains.
Brandeis Marshall, a data ethics consultant, described platforms as “an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly,” as she explained to TechCrunch.
“Altering one’s profile picture and name represents just one of these levers,” she noted, adding that the algorithm is also affected by a user’s past and current interactions with content.
“The complete range of factors that cause this algorithm to prioritize one individual’s content over another remains unknown. This is a far more complex issue than many realize,” Marshall concluded.
Investigating Potential Gender Bias on LinkedIn
The #WearthePants initiative was launched by two entrepreneurs, Cindy Gallop and Jane Evans, to explore a potential disparity in content visibility.
Their core question was whether gender played a role in the reduced engagement experienced by many women on the platform. Gallop and Evans had a combined following of more than 150,000, while the two men participating in their experiment had approximately 9,400 followers between them.
Observed Discrepancies in Content Reach
Gallop observed that her published content was shown to only 801 users. Conversely, a male participant sharing the identical content reached 10,408 people – more than 100% of his follower base.
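The "more than 100%" framing compares a post's impressions to the author's follower count. A quick sketch shows the asymmetry; note that 150,000 is Gallop and Evans's combined following and 9,400 the two men's combined following, so the per-person ratios below are approximations:

```python
def reach_ratio(impressions: int, followers: int) -> float:
    """Impressions expressed as a percentage of follower count."""
    return 100 * impressions / followers

# Approximate figures: follower counts are combined totals for
# each pair of participants, not exact per-person numbers.
print(round(reach_ratio(801, 150_000), 2))   # well under 1% of the audience
print(round(reach_ratio(10_408, 9_400), 1))  # above 100%: beyond own followers
```

A ratio above 100% means the post traveled beyond the author's own followers, presumably via the algorithmically curated feed.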
Further participation from other women, including Joyner, a business marketer utilizing LinkedIn, raised concerns about potential algorithmic bias.
Joyner expressed a desire for LinkedIn to acknowledge and address any inherent biases within its content selection algorithm.
The Opacity of Algorithmic Training
However, LinkedIn, like other platforms that rely on LLMs, provides limited transparency regarding how its content-ranking models are trained.
Marshall suggests that these platforms often “inherently embody a white, male, Western-centric perspective” due to the demographics of those involved in model training.
Researchers have identified evidence of human biases, such as sexism and racism, within popular LLMs, as these models are trained on human-generated data and frequently involve human oversight in post-training refinement.
The precise implementation of AI systems by individual companies remains largely concealed within the “algorithmic black box.”
LinkedIn’s Response and Explanation
LinkedIn maintains that the #WearthePants experiment did not definitively demonstrate gender bias against female users.
In an August statement, and reiterated by Sakshi Jain, LinkedIn’s Head of Responsible AI and Governance in November, the company asserted that its systems do not utilize demographic information to influence content visibility.
Instead, LinkedIn explained to TechCrunch that it conducts tests on millions of posts to optimize user connections to relevant opportunities.
Demographic data is utilized solely for these testing purposes, to ensure equitable competition among creators and consistency in the user feed experience, according to the company.
LinkedIn has a documented history of researching and refining its algorithm to foster a less biased user experience.
Exploring Potential Contributing Factors
Marshall posits that unidentified variables likely contribute to the observed variations in impressions, such as the impact of participating in a viral trend.
Furthermore, accounts posting after a period of inactivity may receive an algorithmic boost.
The style and tone of writing could also be influential; Michelle, for instance, reported a 200% increase in impressions and a 27% rise in engagement after adopting a more direct and simplified writing style while posting under a male pseudonym.
She concluded that the system did not exhibit “explicit sexism,” but appeared to assign lower value to communication styles commonly associated with women.
The Role of Writing Style Stereotypes
Concise writing is often stereotypically associated with men, while women’s writing is often perceived as softer and more emotionally driven.
If an LLM is trained to prioritize writing conforming to male stereotypes, this represents a subtle, implicit bias – a phenomenon researchers have frequently identified in LLMs.
The Influence of User Profiles and Behavior
Sarah Dean, a computer science assistant professor at Cornell, notes that platforms like LinkedIn consider entire user profiles, in addition to user behavior, when determining content promotion.
This includes professional experience listed on a profile and the types of content a user typically interacts with.
“A person’s demographics can impact ‘both sides’ of the algorithm – what they see and who sees their posts,” Dean stated.
LinkedIn confirmed to TechCrunch that its AI systems analyze hundreds of signals to determine content prioritization, including profile insights, network connections, and user activity.
The company emphasizes ongoing testing to identify content most relevant to users’ careers, and acknowledges that member behavior – clicks, saves, and engagement – dynamically shapes the feed.
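LinkedIn has not published its signals or their weights, so any concrete model is guesswork. The sketch below is a generic weighted-sum ranker; every signal name and weight is invented purely to illustrate how "hundreds of signals" about a profile, a network, and past behavior could combine into a single feed-ordering score:

```python
# Illustrative only: these signal names and weights are invented,
# not LinkedIn's. Real systems combine hundreds of such signals.
SIGNAL_WEIGHTS = {
    "profile_relevance": 0.40,   # overlap with the viewer's field
    "network_proximity": 0.35,   # connection degree, mutual contacts
    "past_engagement":   0.25,   # viewer's history with similar posts
}

def feed_score(signals: dict) -> float:
    """Weighted sum of signal values normalized to the 0..1 range."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())

# A post strong on all signals outranks one strong on only one,
# even if the single signal is equally high for both.
close_contact = {"profile_relevance": 0.9, "network_proximity": 0.8,
                 "past_engagement": 0.7}
stranger = {"profile_relevance": 0.9}
print(feed_score(close_contact) > feed_score(stranger))  # True
```

One consequence of this structure, consistent with Dean's point above, is that a post's visibility depends on viewer-side signals as much as on anything the author controls.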
Shifting Algorithmic Priorities
Chad Johnson, a LinkedIn sales expert, observed a shift in algorithmic focus, with reduced emphasis on metrics like likes, comments, and reposts.
He suggests the LLM system now prioritizes content demonstrating understanding, clarity, and value.
Challenges in Determining Causation
These factors collectively complicate the task of definitively attributing any observed #WearthePants results to a single cause.
User Dissatisfaction with LinkedIn's Algorithm
LinkedIn's latest algorithm appears to be met with disapproval, or simply confusion, from a significant portion of its user base, spanning various demographics.
Shailvi Wakhlu, a consultant specializing in data science, shared with TechCrunch that her consistent posting schedule of at least one post daily for five years previously generated thousands of impressions. Currently, she and her husband are experiencing impressions in the hundreds. She described this as discouraging for content creators who have cultivated a substantial and dedicated following.
Varied Experiences in Engagement
Reports regarding engagement levels are inconsistent. One individual reported a 50% decrease in engagement over recent months. Conversely, another user noted a more than 100% increase in post impressions and reach during the same period.
This increase, they explained to TechCrunch, is attributable to a focused content strategy targeting specific audiences with specialized topics – a strategy the new algorithm seems to favor. Clients of this user are also observing similar positive trends.
Concerns Regarding Potential Bias
Marshall, a Black content creator, suspects that posts detailing her personal experiences receive less visibility compared to those addressing her racial identity. “If Black women are only acknowledged when discussing issues specific to Black women, and not when sharing their professional expertise, this indicates a potential bias,” she stated.
Researcher Dean suggests the algorithm may be simply amplifying pre-existing patterns within the platform. Posts gaining traction might be rewarded not due to the author’s demographics, but because of a historical pattern of positive response to similar content.
While Marshall’s observations raise concerns about implicit bias, Dean cautions that anecdotal evidence alone is insufficient to confirm such a bias definitively.
LinkedIn's Perspective on Current Trends
LinkedIn has provided some insight into what currently performs well on the platform. The company reports a 15% year-over-year increase in overall posting activity and a 24% year-over-year rise in comments.
“This increased activity translates to greater competition within the user feed,” LinkedIn explained. Content focusing on professional insights, career advice, industry news and analysis, and educational material related to work, business, and the economy are currently experiencing strong performance.
A Call for Transparency
A common sentiment among users is confusion regarding the algorithm’s functionality. Michelle expressed a desire for greater transparency from LinkedIn.
However, given the historical tendency of companies to protect the inner workings of content-selection algorithms – and the potential for manipulation that transparency could enable – fulfilling this request is unlikely.
This article has been updated to correct the spelling of Shailvi Wakhlu’s name.
