Facebook is Using AI to Identify Terrorists

Posted by SherDil · Full Member · Joined Mar 15, 2016 · Messages: 1,719 · Reaction score: 0 · Country: Turkey · Location: Pakistan
Facebook founder Mark Zuckerberg has outlined a plan to let artificial intelligence (AI) software review content posted on the social network.



While describing the roadmap, Mark claimed that Facebook's algorithms would be able to identify bullying, violence, terrorism and even users with suicidal thoughts. He also admitted that some content previously removed from the social network had been taken down by mistake.

He also said it would take years of hard work to develop algorithms capable of reviewing and approving content on Facebook.

Errors
In his letter discussing the future of Facebook, Mark acknowledged that it was not possible to manually review the billions of posts and messages that appear on the site every day.

“The complexity of the issues we’ve seen has outstripped our existing processes for governing the community.”- Mark Zuckerberg

The platform was criticized in 2014, when reports revealed that one of the killers of Fusilier Lee Rigby had discussed murdering a soldier online months before the attack took place.

Citing other incidents, Mark pointed to the mistaken removal of videos related to the Black Lives Matter movement and of the historic ‘napalm girl’ photograph from Vietnam, saying such cases showed the “errors” in the existing content review process.

He also said Facebook is researching systems that can read text and examine photos and videos in order to flag anything dangerous that might be happening.

“This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content. Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.”
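The distinction Zuckerberg describes — news coverage of terrorism versus actual terrorist propaganda — is, at its core, a text-classification problem. Facebook has not published how its system works; the sketch below is only a toy naive Bayes classifier over a handful of invented example sentences, to illustrate the general idea of learning such a distinction from labeled text.

```python
from collections import Counter
import math

# Tiny, invented training set -- purely illustrative, not real data.
TRAIN = [
    ("officials report attack investigation underway", "news"),
    ("reporters cover the aftermath of the attack", "news"),
    ("join our fight and attack the unbelievers", "propaganda"),
    ("glory to those who attack our enemies", "propaganda"),
]

def train(data):
    """Count words per class and examples per class."""
    word_counts = {"news": Counter(), "propaganda": Counter()}
    class_counts = Counter()
    for text, label in data:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Pick the class with the highest log prior + log likelihood,
    using add-one smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_score = None, float("-inf")
    total_examples = sum(class_counts.values())
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / total_examples)
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best, best_score = label, score
    return best
```

A real system would of course use far richer features and vastly more data; the point is only that "news about terrorism" and "terrorist propaganda" can share vocabulary (both mention attacks), so the classifier has to weigh the surrounding context.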

Personal filtering
Mark said his ultimate goal was to let Facebook users post whatever they liked, as long as the content was within the law. Algorithms would then automatically detect what has been uploaded and subject it to scrutiny by AI. After that review, users would be able to apply personal filters to remove the types of posts they did not want to see in their news feed.

“Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings. For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum. It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more. At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”
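The "referendum" default Zuckerberg describes is a simple fallback rule: use each user's explicit choice where one exists, otherwise take the majority choice of their region. A minimal sketch, with entirely hypothetical category names and setting values:

```python
from collections import Counter

# Hypothetical content categories -- illustrative names only.
CATEGORIES = ["nudity", "violence", "profanity"]

def effective_settings(user_choices, region_choices):
    """Return the user's explicit choice per category, falling back to
    the regional majority among users who did choose (the 'referendum')."""
    settings = {}
    for cat in CATEGORIES:
        if cat in user_choices:
            settings[cat] = user_choices[cat]
        else:
            votes = Counter(c[cat] for c in region_choices if cat in c)
            settings[cat] = votes.most_common(1)[0][0]
    return settings

# Invented regional sample: what other users in the region selected.
region = [
    {"nudity": "hide", "violence": "hide", "profanity": "show"},
    {"nudity": "hide", "violence": "show", "profanity": "show"},
    {"nudity": "show", "violence": "hide", "profanity": "show"},
]

# This user only decided about violence; the rest default to the majority.
print(effective_settings({"violence": "show"}, region))
# {'nudity': 'hide', 'violence': 'show', 'profanity': 'show'}
```

The design choice worth noting is that the default is per-region rather than global, so the same undecided user would see different defaults in different countries.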

The plan was welcomed by the Family Online Safety Institute, a member of Facebook’s own safety advisory board.

Source: BBC
 

Well, we all know that if this can also be exploited for a dollar, they will certainly do that too.
 
Let me guess: anything negative about Jews, any 'anti-Semitism', or anything about Israel will go into the terrorist pile, while anything defending Islam, Muslims etc. will end up in the same pile. That info will then no doubt be passed to law enforcement agencies. Internet censorship is on the way.
 
Every day we come closer to creating Skynet.
 
The first stage in affinity data!

FB has tonnes of insightful affinity data which can be used to arrive at a correlation value with being a terrorist!

From a purely analytic point of view, FB would also require a whole bunch of certified terrorists as active users whose behavior can generate the model's training data :lol:
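The commenter's point is a real one in supervised learning: you cannot estimate how strongly an affinity signal correlates with a label unless you have positive examples of that label. A toy sketch of the "correlation value" idea, with a completely made-up feature and labels:

```python
import math

# Invented affinity scores for six users and a hypothetical binary label
# (1 = flagged account). With no positive labels at all, the label column
# has zero variance and no correlation can be computed -- which is exactly
# why a model needs labeled examples to train on.
features = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
labels   = [0,   0,   0,    1,   1,   0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(features, labels)  # strongly positive in this contrived sample
```

In this contrived data the high-affinity users are exactly the flagged ones, so the correlation comes out strongly positive; real behavioral data would be far noisier, and correlation alone would not justify any conclusion about an individual.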
 
:undecided: what if FB turns out to be a Fox News fan?
 
As soon as the technology gets cheaper, every country will have its own Facebook.

Your understanding of business concepts is pretty freaking far from reality!

Maybe you should try to learn something before dishing out blanket statements like that.

:undecided: what if FB turns out to be a Fox News fan?

I didn't understand the joke (if there was one).
 
Yep , time for a Muslim Facebook, our AI will nab Trump and his Zionist gang.

It's not about nabbing someone; it's about being fair. I feel my posts on FB reach only 3% of my friends.
@Providence Facebook is not fair. So by the law of nature it's not going to remain the sole social media platform in the long run.
 
Yep , time for a Muslim Facebook, our AI will nab Trump and his Zionist gang.

Pray, do you have any idea about the graph-theoretic models which are fundamental to the success of any social networking site?

I will drop a clue. It ain't tech knowledge, and neither is it religion :lol:

It's not about nabbing someone; it's about being fair. I feel my posts on FB reach only 3% of my friends.
@Providence Facebook is not fair. So by the law of nature it's not going to remain the sole social media platform in the long run.

Yes. Until as late as 2011, FB never restricted the reach of any page or profile. So if your page had a decent number of followers and fairly viral content, you could technically have grown to any level. This allowed a lot of early startups to grow organically and compete with established players for mind space, which in turn had a negative impact on FB's ad revenues.

Since 2012, FB has gradually restricted the reach of your content, now down to just 5% of your network, artificially limiting its visibility so that you pay to earn better reach.
FB is no longer a medium for early startups to grow.
 
