HTTP Request Status - Reference Implementation

Greater control for the client relative to the server: give more control of the application to the client-side application user, rather than the server-side application owner/originator.

See discussion here

Revising the HTTP protocol to better balance client-side user requirements would also help toward a proper Service-Oriented Architecture (SOA).

It would help the user select the destination of the services required, that is, the domain and resource that provide the service, instead of relying solely on the URL/URI embedded in the hypertext provided by the server.

It would also help move away from vendor lock-in to services from a single provider on a single domain. This would require better componentisation and modularisation of the client-side request.

The client-side request status is the domain of the client, not the server: the request for business services.

In a fully fledged client-side service-oriented architecture these might include business, information, application, and technology services. In the current state of the art these might be some part of an XaaS provision.

The HTTP Request Status would also permit a preferred service provider list, similar to a whitelist for web sites, and a prohibited service provider list, similar to a blacklist for web sites. This capability is probably most important for levelling the playing field for users, user groups, and consumer groups.
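As a minimal sketch of such a client-side check (the list contents, format, and function names here are illustrative assumptions, not part of any standard), a request could be classified against the user's preferred and prohibited provider lists before it is sent:

```python
from urllib.parse import urlparse

# Hypothetical user-configured provider lists; in practice these might be
# supplied by a consumer group or other trusted third party.
PREFERRED = {"which.co.uk", "citizensadvice.org.uk"}
PROHIBITED = {"known-scam.example"}

def classify_provider(url: str) -> str:
    """Classify a request target as preferred, prohibited, or unknown."""
    host = urlparse(url).hostname or ""
    if host in PROHIBITED:
        return "prohibited"   # block or warn before the request is sent
    if host in PREFERRED:
        return "preferred"
    return "unknown"          # fall back to the user's default risk appetite
```

The point of the sketch is only that the decision happens on the client, from lists the user controls, rather than from hypertext the server supplies.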

For example, in the United Kingdom: Which? (‘testing, reviews, and advice’), Citizens Advice (‘confidential information and advice to assist people with legal, debt, consumer, housing and other problems’), and so on.

Choice for the user: anywhere in the internet ecosystem the user might want to exercise choice, in regard to better competition for digital goods and services, by whatever criteria the individual user or user group chooses to express a preference.

The browser, which might include other seven-layer-model capabilities, is the channel for internet engagement. The user must be given greater freedom to determine their engagement with internet-provided goods and services.

The current server-side model drives and constrains user attention and economic interaction, and restricts choice.

This is also a security issue: bad actors of many types, consumer organisations and/or government in times of international tension, crime prevention strategies.

Lists of domains and/or resources that might pose user risk, including, inter alia: financial risk, legal risk, reputational risk, criminal risk, bad-actor risk, and so on.

In regard to information risk, aka epistemic risk, specific epistemic domains might be served by specific internet domains. For example, in relation to climate change: in the UK the Met Office, in the US NOAA, and so on, supplying criteria for the evaluation of services and information providers in the domains of weather and climate.

For financial risk, in the UK the FCA might provide a similar view on financial services providers. For example, consumer groups interested in low-risk ethical financial services might include in their respective services those of . This implies locale-specific tailoring of services for users, which might include participation of regional and local chambers of commerce, the third sector, local government, and so on.

Some of these lists might come from government organisations like the UK National Cyber Security Centre (NCSC); similarly, in the UK, the NCA, or in the US, the FBI; also, for example, the UK Metropolitan Police or the US NYPD. These would aid in avoiding online criminal harm, for example, in the UK, dial-a-drug and county lines drug distribution. The implication here is multi-device and multi-channel, particularly in regard to safeguarding.

See discussion here

These lists might be mediated by third parties like consumer or rights groups. The user could then choose to take the lists from government organisations, from trusted third-party organisations, or on their own cognisance. This implies the necessity of certificates and a regulatory framework for trusted information providers.
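As a hedged sketch of list provenance checking: the real mechanism implied above would be certificate-backed signatures from a regulated trusted provider; the snippet below substitutes a shared-secret HMAC purely to illustrate the accept-only-verified-lists flow, and all names are hypothetical:

```python
import hashlib
import hmac

def sign_list(entries: list[str], key: bytes) -> str:
    """Publisher side: sign the serialised list. (HMAC stands in here for a
    real certificate-based signature.)"""
    payload = "\n".join(entries).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_list(entries: list[str], signature: str, key: bytes) -> bool:
    """Client side: accept the list only if the signature checks out."""
    expected = sign_list(entries, key)
    return hmac.compare_digest(expected, signature)
```

A tampered list (an entry added or removed in transit) would fail verification and be rejected by the client.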

Lists would likely be by reference; how they are collated, in whole or in part, at runtime is an implementation issue and not a concern for this topic. They could be mediated by the application layer at launch/instantiation time, but some sort of hot, real-time update capability would be a requirement, for example in a mass cyber attack scenario.

HTTP X01 Risk
HTTP X02 Risk information <epistemic risk, fake news, misinformation, disinformation, bot farms, >
HTTP X03 Risk reputational <>
HTTP X04 Risk financial <scams, >
HTTP X05 Risk legal <against the law in a particular jurisdiction, >
HTTP X06 Risk criminal <phishing, social engineering, etc, >
HTTP X07 Risk technical <ransomware, hacking, etc, >
and so on.
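The proposed codes above could be modelled as a simple enumeration on the client. This is only an illustrative sketch: X01–X07 is the placeholder numbering from this proposal, not registered HTTP status codes, and the category notes mirror the examples given above:

```python
from enum import Enum

class RiskStatus(Enum):
    """Placeholder risk categories from this proposal (X01-X07 are not
    registered HTTP status codes)."""
    RISK = "X01"
    RISK_INFORMATION = "X02"   # epistemic risk, misinformation, bot farms
    RISK_REPUTATIONAL = "X03"
    RISK_FINANCIAL = "X04"     # scams
    RISK_LEGAL = "X05"         # unlawful in a particular jurisdiction
    RISK_CRIMINAL = "X06"      # phishing, social engineering
    RISK_TECHNICAL = "X07"     # ransomware, hacking

def describe(status: RiskStatus) -> str:
    """Render a status in the 'HTTP Xnn Risk ...' form used above."""
    return f"HTTP {status.value} {status.name.replace('_', ' ').title()}"
```

A browser could then branch on the category, for example warning differently on a financial-risk match than on a technical-risk one.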

The choice always rests with the user's risk appetite, but forewarned is forearmed. Browser user settings would determine whether these list updates are automatic or not, and might include different profiles for different risk appetites and use cases. See discussion here on browser profile settings. The question is what should be cached and updated on a periodic basis, and what must be hot at runtime, by pulled reference or possibly pushed.

The web is not the Utopian academic information publishing and exchange medium it was first conceived to be at CERN at its inception. The context has changed with its ubiquity, so change-manage to mitigate the current-state inherent risk with a next-state client-side request status.


<industrialised world, global south, others, >


ISO 31000 Risk Management:

ISO 31000, Wikipedia
Committee of Sponsoring Organizations of the Treadway Commission, Wikipedia
Risk Management, Wikipedia
