Samsung Bans Employees from Using Popular AI Tools Over Data Leak Fears

Samsung recently prohibited its employees from using popular AI tools like ChatGPT and Google Bard due to data security concerns, according to Bloomberg. The tech giant took action after an engineer accidentally shared sensitive information on ChatGPT last month.

In an internal memo, Samsung told staff that it is "temporarily restricting the use of generative AI" until proper security measures are in place to ensure safe and productive use of the technology. The company said its headquarters is reviewing policies to prevent another data leak.

While many companies encourage the use of AI tools to boost efficiency, Samsung joins the ranks of major banks like JPMorgan Chase, Bank of America and Citigroup in banning them over data privacy risks.

The policy is likely welcomed by many Samsung employees who share similar concerns. An internal April survey found 65% of Samsung staff believed AI systems pose security threats.

Although Samsung is limiting broad access to third-party AI tools, the company continues developing its own AI software for tasks like software engineering and translation. The move highlights the balancing act companies face in maximizing the benefits of AI while mitigating the risks. By promptly responding to the recent data leak incident, Samsung is prioritizing cybersecurity and privacy over some efficiency gains, at least temporarily.
