China’s Personal Information Protection Law to Profoundly Impact Credit Fintechs, Experts Call for Algorithms to Be Included under Anti-trust Regulation

Industry observers expect the official launch of China’s Personal Information Protection Law (个人信息保护法) on 2 November to have far-reaching implications for the country’s fintech platforms that rely upon the use of big data.

Chief amongst those affected will be loan facilitation platforms and companies that use big data for credit rating purposes.

“In the past a number of loan facilitation organisations and big data companies would frequently breach regulations to obtain large amounts of personal information, in order to secure more information on customers,” a source from a big data company told Cailianshe.

“They would use various types of information to search for and improve upon the personal information they had at hand, and use this as the foundation for establishing big data databases to be used for credit ratings services.”

A large number of loan facilitation companies provide banks with third-party credit rating services after helping them to acquire customers. Meanwhile, many Internet platforms are currently searching for licensed credit companies to serve as cooperative partners, in order to continue operating following the Chinese central bank’s call for an end to big data abuses.

Ma Zhitao (马智涛), chief information officer of Tencent-backed private bank WeBank, expects implementation of the law to drive the use of the Distributed Data Transfer Protocol (DDTP), in order to satisfy the law’s requirements concerning the right to data portability.

Zhang Xiaohui (张晓慧), head of the Tsinghua Wudaokou School of Finance, expects Chinese regulators to place greater scrutiny upon the algorithms employed by China’s giant Internet platforms.

“The main algorithms of big tech companies must be subject to external regulation and increases in transparency,” said Zhang at the Bund Finance Summit.

“With regard to the regulation of algorithms, it will be necessary to establish principles of openness and transparency, in order to ensure that users receive fair treatment, and to effectively perform risk and impact assessments of automated decision-making in advance, so as to avoid the risks created by the abuse of algorithms.

“In future, consideration should be given to including algorithms under anti-trust regulation.”