GPT-4 is an example of what are called "Large Language Models."
See page 16 for a summary.
Math majors are toast.
The "model" categories are explained in the paper.
WORKING PAPER
Occupations with highest exposure, by group (% Exposure):

Human α
Interpreters and Translators 76.5
Survey Researchers 75.0
Poets, Lyricists and Creative Writers 68.8
Animal Scientists 66.7
Public Relations Specialists 66.7

Human β
Survey Researchers 84.4
Writers and Authors 82.5
Interpreters and Translators 82.4
Public Relations Specialists 80.6
Animal Scientists 77.8

Human ζ
Mathematicians 100.0
Tax Preparers 100.0
Financial Quantitative Analysts 100.0
Writers and Authors 100.0
Web and Digital Interface Designers 100.0

Model α
Mathematicians 100.0
Correspondence Clerks 95.2
Blockchain Engineers 94.1
Court Reporters and Simultaneous Captioners 92.9
Proofreaders and Copy Markers 90.9

Model β
Mathematicians 100.0
Blockchain Engineers 97.1
Court Reporters and Simultaneous Captioners 96.4
Proofreaders and Copy Markers 95.5
Correspondence Clerks 95.2

Model ζ
Accountants and Auditors 100.0
News Analysts, Reporters, and Journalists 100.0
Legal Secretaries and Administrative Assistants 100.0
Clinical Data Managers 100.0
Climate Change Policy Analysts 100.0

Highest variance
Search Marketing Strategists 14.5
Graphic Designers 13.4
Investment Fund Managers 13.0
Financial Managers 13.0
Insurance Appraisers, Auto Damage 12.6
https://arxiv.org/pdf/2303.10130.pdf
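As I read the paper, the α/β/ζ groups are three ways of rolling up the task-level rubric labels E1 (directly exposed to an LLM) and E2 (exposed given additional LLM-powered software) into a single occupation score. A minimal sketch of that aggregation, with my own function name and made-up example shares:

```python
def exposure_scores(e1_share, e2_share):
    """Combine task-label shares into the paper's three exposure measures.

    e1_share: fraction of an occupation's tasks labeled E1 (direct exposure)
    e2_share: fraction labeled E2 (exposure via LLM-powered software)
    Returns (alpha, beta, zeta) as percentages.
    """
    alpha = e1_share                  # α: count E1 tasks only
    beta = e1_share + 0.5 * e2_share  # β: E1 plus half weight on E2
    zeta = e1_share + e2_share        # ζ: E1 and E2 counted fully
    return tuple(round(100 * x, 1) for x in (alpha, beta, zeta))

# Hypothetical occupation: 60% of tasks E1, 30% E2
print(exposure_scores(0.6, 0.3))  # (60.0, 75.0, 90.0)
```

This is why a "Math majors are toast" occupation like Mathematicians can hit 100.0 under ζ: every task was labeled either E1 or E2.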
I'm glad I'm retired.
PS - Many leaders and funders of AI development have signed an open letter calling for a six-month pause in the training of current AI models and the development of even more advanced ones.
https://futureoflife.org/open-letter...i-experiments/
GPT-3.5 was infamous for its unequal treatment of political views. When one considers how the political spectrum has shifted almost entirely to one side despite roughly equal numbers of adherents on each side, one begins to wonder whether GPT isn't already being used to devise political strategies that are then applied.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.