Between the Trump Administration declaring private accounts a red flag for foreign students and prospective employers using AI to find fake applicants, the old rules are becoming obsolete.
For college students looking for jobs or internships, the standard advice about social media has been this: Build up your professional profile on LinkedIn, but scrub other social media accounts (the ones displaying your political opinions or party antics) or just make them private.
Yet recent developments could make that playbook obsolete. The most jarring is the Trump Administration’s order to U.S. consular personnel to require those applying for student and cultural exchange visas to set their social media to public, so as to allow a review of their “entire online presence.” The State Department is explicit about what it’s looking for: indications of “hostility” to the U.S. government or culture, as well as views it considers supportive of terrorism or antisemitism. Those who do not make their accounts public could have their applications denied. In addition, an absence of a social media presence can also be held against an applicant, as a possible effort to evade scrutiny of their true views.
“You’re damned if you do post, damned if you don’t,” sighs one international student who requested anonymity because she fears undermining her own immigration status. The new policy has foreign students doing everything from requesting their columns be removed from student publications to painstakingly undoing any of their Instagram likes on pro-Palestinian or anti-Trump posts. (It must be done manually and takes many clicks. But this way, their accounts will still be public and active, just not political.)
While visa applicants clearly have the most to worry about, American students, too, face a potential Catch-22. What they’ve said on social media can hurt them when they are job hunting. Yet erasing or cloaking their public online presence can backfire in less obvious and predictable ways, as some prospective employers adopt AI-driven screening of social media to determine whether applicants are real, and even whether they’re a cultural fit.
“It creates a double bind: students are told to curate or clean up their profiles for professionalism, yet efforts to control their digital presence can be framed as suspicious or evasive,” says Paromita Pain, an associate professor of global media at the University of Nevada, Reno.
This is a new twist. Research published in 2019 suggested that making accounts private didn’t hurt, and might even help, job hunters. “In general, hiring managers saw those that used privacy settings and had strict privacy settings as slightly favorable, I think because they understood they know how to manage confidential information,” explains Chris Hartwell, an associate professor of management at Utah State University who conducted that research.
To be clear, U.S. employers haven’t explicitly said they’re going to demand private accounts be made public or that they’ll hold the existence of private or deleted accounts against prospective workers, the way the State Department has with visa applicants. Indeed, several states, including California, Maryland and New York, have laws that explicitly bar employers from asking for access to private social accounts.
But artificial intelligence is beginning to change hiring practices in significant ways. AI has led to an explosion of fake (or stolen) identities, and fake job candidates. In one notorious case, an Arizona woman was just sentenced to 102 months in prison for her role in an elaborate scheme that used stolen U.S. identities to place North Koreans in remote information technology jobs at 309 U.S. companies.
A fourth of candidates applying to any job could be fake by 2028, research and advisory firm Gartner predicts. The technology is getting good enough that in just 70 minutes, a novice AI user can create a fake profile and masquerade as a real person during a virtual interview with a recruiter or hiring manager. In March, Dawid Moczadlot, cofounder of Vidoc Security Labs, posted a video on LinkedIn of an interview he cut short with a job candidate who was using AI to mask his true appearance. It was the second time he’d encountered that ruse in two months, he wrote.
So employers have good reason to be wary, which is spawning a whole new business of AI pre-screening of applicants’ social media. Companies are looking to make sure people are real, and also in some cases, a cultural fit.
For example, Tofu, a small, two-year-old startup, pivoted last September to using machine learning and AI to screen applicants’ social media profiles and publicly available information to corroborate their real identity. “The point is to find them before hiring managers spend time interviewing fake applicants,” explains Jason Zoltak, Tofu’s cofounder and CEO.
Among the signals Tofu examines are the age of social accounts, posting and liking activity, and even the number of LinkedIn connections. Tofu will also report to a prospective employer the last found date of a deleted profile, as well as any accounts that seem empty. A typical fake candidate might have a LinkedIn account that’s about four months old with two or three connections, or an empty Instagram or TikTok profile, Zoltak says.
So what does this mean for students and workers who’ve scrubbed their online presence? They won’t necessarily get flagged as fake candidates, says Zoltak, although he doesn’t entirely rule out that possibility. But, he points out, there are other ways to validate people are real, including checking the age of the email address they used to apply for a job, as well as their phone numbers and carriers and metadata in profiles.
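Taken together, the signals described above amount to a simple risk heuristic. As a rough illustration only (the weights, thresholds and field names below are invented for this sketch, not Tofu’s actual method), a screener might combine them like this:

```python
from dataclasses import dataclass

@dataclass
class ApplicantProfile:
    linkedin_age_months: float       # how long the LinkedIn account has existed
    linkedin_connections: int
    has_empty_social_profiles: bool  # e.g. an Instagram or TikTok with no posts
    email_age_years: float           # age of the email address used to apply

def fake_candidate_score(p: ApplicantProfile) -> float:
    """Return a 0-1 suspicion score; higher means more likely fake.
    Weights and cutoffs are illustrative guesses, not a vendor's values."""
    score = 0.0
    if p.linkedin_age_months < 6:    # very new account
        score += 0.35
    if p.linkedin_connections < 10:  # almost no professional network
        score += 0.30
    if p.has_empty_social_profiles:  # shell accounts with no history
        score += 0.20
    if p.email_age_years < 1:        # freshly created email address
        score += 0.15
    return round(min(score, 1.0), 2)

# The article's "typical fake candidate": a roughly four-month-old
# LinkedIn with two or three connections and empty Instagram/TikTok.
suspect = ApplicantProfile(4, 3, True, 0.5)
print(fake_candidate_score(suspect))  # → 1.0
```

The point of the sketch is that a student who has scrubbed everything can look, to a scoring rule like this, much like a fabricated identity, which is why Zoltak’s fallback checks on email age and phone metadata matter.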
The advice here for students: Set up a LinkedIn account and the email address you’ll use for job hunting well before you start your search. A LinkedIn account might seem like a no-brainer. But Elizabeth Soady, a member of the career services advising team at the University of Richmond, says because students have heard for years about the dangers of their digital footprint, many are actually hesitant to use social media profiles for professional purposes.
They’re also still scrubbing away. DeleteMe CEO Rob Shavell reports students are increasingly turning to services like his as they approach graduation. “All of a sudden [younger users are] realizing they’ve been very cavalier about the information they were sharing online and how it’s showing up everywhere,” he says. “It dovetails really nicely with their first job search.”
But erasing too much personal and private information from the internet could backfire as screening tools like Tofu use that information to verify you’re a real person.
It’s not just a search for fake applicants that AI is spurring. Companies are increasingly screening social media history as part of their background check for future hires. Darrin Lipscomb, founder and CEO of Ferretly, says he launched the company in 2019 to vet social media activity and online presence (including news articles) as a part of security clearances. But now, with the spread of AI, he’s offering the same sort of screening for pre-employment checks.
In addition to working with police departments and political campaigns, Ferretly has signed up companies across business sectors to vet not only high-ranking executives, but also customer-facing employees and even social media influencer partners. The company is up to over 40 employees and 1,000 clients and even vetted candidates for the NFL draft. It has also worked with Democratic and Republican congressional campaign committees in the United States and political parties in the U.K., Australia and Canada to vet political candidates, staff and appointees.
For every candidate, Ferretly produces a social media report that includes behavioral trends (including likes and post count), sentiment towards specific topics, and engagement analysis (private versus public accounts). The company makes no judgment on the contents of the report, Lipscomb emphasizes.
“We’re providing a tool to be able to look at a profile and ask ‘does this person represent our values as an organization? Are they going to strengthen our culture?’” he says. “You can apply that same thinking to the student visa application. Is this person going to strengthen the American culture?” In other words, even if the practices aren’t exactly the same, there’s a correlation between the State Department’s enhanced viewpoint screening and what’s gaining a foothold in the private sector.
In today’s charged political atmosphere, and with today’s AI capabilities, American students aren’t crazy to worry about how their social media will be used.
Last year, college guidance site Intelligent.com surveyed 672 recent and current students who said they had participated in pro-Palestinian protests on campus. More than half said they were always (11%), often (19%) or sometimes (23%) asked about their activism, and 28% said they had removed online evidence of that activism. In addition, 29% said they’d had a job offer rescinded in the last six months, with 68% of those who lost an offer believing the decision was either “definitely” or “probably” linked to their activism.
Beyond political activity, Pain, the Nevada prof, warns students to think twice before sharing anything identity-based (including sexual identity and religion) or related to mental health or disabilities. Even with state anti-discrimination laws, she says “bias still operates subtly in hiring.”
So what about those laws?
Rod M. Fliegel, an employment lawyer and co-chair of Littler’s background check practice group, says that 28 states regulate what information garnered from social media or internet searches employers can use as part of their hiring decisions. “Just by posting something public doesn’t give an employer free rein to consider the information during the hiring process,” he says.
Of course, recruiters or employers could still unwittingly come across protected information in public profiles. Which is why, if they want to avoid legal trouble, Fliegel urges them to set company-wide policies and guardrails describing what can be used during pre-employment screening.
Which raises the question of whether such limits will be built into any AI screens. College students looking to enter a tough entry-level job market would probably be wise not to count on it.