

Throughout this week’s content, we can see a drastic change in the aesthetic qualities of art in the early Middle Ages. This change is not because of a decline in artistic abilities. Rather, it is a sign of a new ideology concerning the purpose of art in regard to changing lifestyles. Consider these new art trends and analyze Jewish, Early Christian, and Byzantine art.

Paragraph 1: Using an example of Jewish or Early Christian Art, formally analyze the artistic developments based on the following: materials, composition, color, spatial qualities, etc.

Paragraph 2: Using an example of Byzantine Art, compare/contrast their aesthetics with that of Jewish and/or Early Christian art. What is similar? What is different?

Note: Please be sure to formally analyze the art rather than discussing only the religious changes of the Middle Ages.

Use this optional source as a guide to formal analysis https://courses.lumenlearning.com/masteryart1/chapter/oer-1-3/

Reference:

Janson’s History of Art: The Western Tradition, Reissued Edition, Volume 1, 8th ed.



Last Name 1

Student’s Name

Course

Instructor

Date

Culture and Communication

Part A

Odysseus embodies the confrontation with wide cultural variety that Dorothy Lee examines from her own perspective. “Now they were serving out the food and pouring wine” (Homer 236); this shows how Odysseus was living a good life. A common misconception about the Trobrianders is that they engage only in subsistence horticultural cultivation. The Trobriand people are native to New Guinea and were shaped by the pre-European colonization period, although no actual record of that period survives. The available descriptions center on magic, garden activities, and sexual behaviors, as well as other cultural aspects of life. This paper compares and contrasts the ideals represented by Odysseus with the values of the Trobrianders as presented by Dorothy Lee, in the first and second halves of the paper respectively.

According to Dorothy Lee’s book “Freedom and Culture,” there are qualities in the community, identified from the standpoint of the Trobrianders, that Odysseus exemplifies and that should be protected. According to Lee, freedom is yet to be achieved: “otherwise the term free was not applied to the freedom of self” (Lee 54). Codifications of reality, primordial culture, and nonlinearity are only a few of the concerns to consider. It is essential to recognize that an object cannot simply change a property and then go on to preserve its original identity. Since the Trobrianders do not explicitly describe their activity in lineal form, they do not conceive of dynamic actions as linked together. Malinowski describes them as follows: “concentrically with the circular row of yam houses there runs a ring of dwelling huts” (Lee 111). No upfront agreement connects actions to aims and means, and there are no teleological or causal connections between the parties involved. Moreover, Trobriand text and speech show no such continuity. As a result, according to Malinowski, it is “jerky,” handed out at particular moments rather than in lines that link.

According to Malinowski, the “kula” is an institution that enshrines principles that are not strictly lineal. For the most part, Malinowski’s scientific attention is directed away from culture as a universal phenomenon and toward a methodical temporal framework that allows for systematic cultural investigation, including comparison and particularity. Fieldwork and the comparative analysis of phenomena in various cultures are prerequisites for in-depth cultural analysis in particular circumstances. The link between human invention and the psychological body reveals the functionalist approach to understanding the world. The “kula” is a precious gift that comes in two varieties, the veigun and the soulava, and is mostly exchanged in the northern area. The kula gift is thus “like a marriage” because it requires two people, each of whom must give and do something to complete the gift.

The cultural components of Odysseus and the Trobrianders are presented through Dorothy Lee’s and Malinowski’s cultural perspectives. Both writers are drawn to the social and cultural aspects of society. Background information on the subject has been provided in the introduction of the paper. The second part presents Dorothy Lee’s point of view on culture as developed in her book “Freedom and Culture.” This is followed by a summary of Malinowski’s approach to the “kula” and the historical context in which it is rooted in its society. Over time, there has been a significant shift in how civilizations conduct their everyday tasks. Change is unavoidable, and cultural habits will continue to evolve as time progresses.

Part B

Celia’s Song

In the story Celia’s Song, the two-headed snake is a myth; the protagonists do not think it is part of actual reality but consider it a fable. As this dialogue takes place, the two-headed snake keeps an eye on the humans. The fact “that the people had neglected to feed and honour him” (Maracle 20), as they once did when locals governed the nation, has enraged him, and he takes it personally. This provokes a disagreement between the two heads that share the snake’s one body. Loyal, the first head, is a defender of humans, believing that they will again respect the species’ needs, while Restless feels that humankind has betrayed them. Loyal despises this and “despises change” (Maracle 23). The longhouse slowly crumbles as “the two heads argue and as the days wear on the argument heats up until both heads are shouting and twisting to emphasize their points of view” (Maracle 21), gradually devolving into ruins. Restless and Loyal continue their battle as shingles fall out of place from the longhouse, which was formerly inhabited.

Restless continues until the longhouse, which has become infested with bones, is destroyed. The bones “shuffle and click…ready for chaos to come” (Maracle 23). After watching the snake slither back into the sea, splashing in the water and diving deeper, Loyal nearly drowns as he gasps for air. Celia decides to investigate further. Restless’s havoc in the sea continues, with boats capsized and the destruction growing in scale. The waters are agitated by Restless’s haphazard journey through them. Restless has violated the one rule of the sea: he has murdered humans, and he is on a course to annihilate human civilization. Amos and Steve have a negative relationship fueled by hatred for one another. Steve is an affluent white male inhabitant who does not seem to be aware of the richness of life in his surroundings. He is primarily interested in advancing his professional career, and he thinks that his current position denigrates him and does not fit his antisocial disposition.

On the other hand, he accepts this job to afford his university education and improve his future employment chances (Maracle 30). Working a little, travelling to Vancouver, getting paid, and drinking a lot are the only things Amos is concerned with, as is bullying the lone white man in the group, Steve. Despite his good intentions, Mink views Amos as ugly and cruel. Amos enjoys the quietness of the saw, but Steve finds its sound painful to hear. Steve is seen as the kindlier and more goal-oriented of the two. The book also illustrates how hatred was instilled in the characters: “Amos glares at Steve, hate tangles his insides….and this white man who has access to everything Amos has been denied flaunts it” (Maracle 33). This might be taken to mean that Amos grew up in a society where white colonialism was the dominant force, resulting in the suffering of the other children.

Amos’s thirst for revenge reflects the serpent’s restless behaviour (Maracle 34). The serpent taunts him to kill, to murder. It takes all of Amos’s strength to suppress the embarrassment he feels and the urge to murder Steve. Likewise, Amos is a source of hatred for Steve (Maracle 34). Specifically, Steve dislikes the way Amos engages with his coworkers, his scruffy appearance, and his lack of the decency to maintain any degree of hygiene. Although they disagree on many things, they are united in their belief that the longhouse is a terrible sign and a symbol of primitivism. In the book, Mink comes upon Amos’s nocturnal ritual in the tent, which he finds fascinating. Amos purposefully refuses to wash regularly, stating that he does not want to share space with anyone (Maracle 39). This is a crucial point to remember. He sleeps with a bottle of vodka beside him daily, afraid of waking in the middle of the night. His dreams turn into nightmares of knives stabbing into his own body and blades retracting covered in blood, chunks of flesh clinging to them. When he is drunk, his fantasies have him stabbing other people, mainly adults, and his nightmares are occasionally filled with the screaming of children. As a result, he is solid (Maracle 40). He transforms into the snake that infests his nightmares on the very night he wakes up.

Works Cited

Lee, Dorothy. “Codifications of Reality: Lineal and Non-lineal.” Freedom and Culture, Prospect Heights, Illinois: Waveland Press, Inc., 1987, pp. 105-120.

Lee, Dorothy. “What Kind of Freedom?” Freedom and Culture, Prospect Heights, Illinois: Waveland Press, Inc., 1987, pp. 53-69.

Homer. The Odyssey. 1614.

Maracle, Lee. Celia’s Song. Cormorant Books, 2014.


Dissertation Title

Name

Abstract

The dissertation was designed to explore cyber risk assessment strategies and propose a hybrid model that a business organization can use to conduct a cyber risk assessment. The study was informed by the growing adoption of computerized systems and technologies such as the internet of things in the modern business world. As organizations continue to leverage the efficiency and effectiveness of technology, their vulnerability, especially regarding clients’ information, also increases. It has therefore become prudent for business organizations to establish cyber security systems. This study used a qualitative content analysis approach. The study found that two frameworks are commonly used to conduct cyber risk assessment: the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) and the Threat Assessment and Remediation Analysis (TARA). Using a weighted scorecard method, the study found that the OCTAVE framework was the more comprehensive and effective cyber risk assessment practice, covering all potential threats and vulnerabilities. The TARA method was also found to be important in identifying the primary vulnerabilities and providing a focused risk-mitigation approach. The study therefore recommended a hybrid method in which TARA is used to explore the core security risks and OCTAVE to provide a comprehensive assessment that exposes other primary and secondary vulnerabilities.

Acknowledgments


Table of Contents

Abstract
Acknowledgments
Glossary
List of Figures
List of Tables
1. Introduction
1.1 Problem Definition
1.2 Scope
1.3 Rationale
1.4 Project Aim and Objectives
1.4.1 Aim
1.4.2 Specific Objectives
1.5 Background Information
2. Literature Review
2.1 Introduction
2.2 Theoretical Framework
2.3 Review of Related Studies
2.3.1 Cyber Risk Assessment
2.3.2 Forms of Cyber Risk Assessment
2.3.3 Cloud Risk Assessment Model
2.3.4 Importance of Cyber Risk Assessment
3. Method and Implementation
3.1 Introduction
3.2 Research Philosophy
3.3 Research Approach
3.4 Research Design
3.5 Sampling Procedure
3.6 Data Collection Procedure
3.7 Data Analysis
3.8 Ethical Considerations
4. Evaluation
4.1 Introduction
4.2 Evaluation Methodology
4.2.1 Baseline Models
4.2.2 Model One – OCTAVE
4.2.3 TARA
4.2.4 Evaluation Metrics
4.3 Results
4.4 Discussion
5. Conclusions
6. Recommendations for Future Work
7. Bibliography

Glossary

NIST: National Institute of Standards and Technology

OCTAVE: Operationally Critical Threat, Asset, and Vulnerability Evaluation

TARA: Threat Assessment and Remediation Analysis

List of Figures

Figure 1: Cybercrime Statistics

List of Tables

Table 1: Weighted Scorecard

1. Introduction

1.1 Problem Definition

The 21st century has been marked by unprecedented technological advancement. The invention of the computer and the internet in the 20th century and the rapid adoption of these technologies at the turn of the millennium have fundamentally changed the information, communication, and technology landscape. As Dalenogare et al. (2018) argue, today’s internet of things has had a significant impact on communication and interaction through the wireless connectivity and interoperability of digital devices. Unlike in the past, when people physically visited their local bank branches for financial transactions, online mobile banking platforms have created convenience: people can transact without visiting their bank branches (Laukkanen, 2017). Cloud computing has, in the last decade, further enhanced the processes and strategies of data processing, storage, transfer, and utilization. Organizations today do not need large technological infrastructures such as servers to store large volumes of data, since the same services are provided virtually through cloud computing (Sony & Naik, 2019). The enterprise reporting system (ERS) is another major advancement in information and communication technology (Ghobakhloo, 2020).

Despite the significant advancement in information and communication technology, the exposure of significantly large volumes of data online has made the technology vulnerable to malicious activities through cyberattacks (Asghar et al., 2019). Most organizations have responded to the cybersecurity threat by developing robust security systems. Nevertheless, different cybersecurity systems face different threats and vulnerabilities (Usmonov et al., 2017), so even the best-designed security system could still be vulnerable to attack. As such, there is a growing need for cyber risk assessment in computing and technology. However, most studies in this area have largely focused on cyber risks, threats, and vulnerabilities, and less on cyber risk assessment itself. The current study therefore seeks to fill the existing gap by assessing the cyber risk of a selected business organization that makes extensive use of computing and technology.

1.2 Scope

The study is limited to using secondary data to assess the need for cyber risk assessment, the issues surrounding it, its types and forms, and the use of this information to design a cyber risk assessment plan. The ideal approach would have been to use the secondary information to design a cyber risk assessment plan and test it on an existing computing system. This would, however, require ethical clearances that cannot be met at this level of the research. Further, it would pose a significant risk to the organization where the risk assessment was conducted. As such, the study is limited to developing a potential plan and evaluating its viability on the assumption that the plan is implemented in the context of an organization.

1.3 Rationale

Organizations use different security strategies to protect their computing systems and technologies from malicious damage and other attacks with potentially adverse consequences. As Akinlorabu et al. (2019) argue, various computing systems have varying threats and vulnerabilities to cyberattacks. However, certain vulnerabilities are common to most network and computer systems. For instance, every PC, regardless of its location within an organization, is vulnerable to ransomware and could therefore be used as the entry point for a malicious attack. Workers’ carelessness with their security details can also be a major source of security breaches.

Akinlorabu et al. (2019) recommend that organizations always assess their system’s vulnerability, both in the context of the common threats and vulnerabilities and of those specific to the organization. The best security network is therefore one that takes care of both external and internal threats and vulnerabilities.

The findings of this study would provide useful information to organizations for the assessment and development of a robust computing and technology security system. The findings shall also contribute additional knowledge to the growing body of research regarding cyber security risk. As Poritskey et al. (2019) observe, the major challenge around the world in developing comprehensive data protection legislation is the numerous vulnerabilities and threats through which a computerized system can be breached. The findings here could therefore help shape the discourse and the development of better data protection regulations that focus on the most fundamental areas.

1.4 Project Aim and Objectives

1.4.1 Aim

The study aims to conduct an assessment of the cyber risk of computer and technology systems. The specific objectives are listed below.

1.4.2 Specific Objectives

Describe the meaning and uses of a cyber risk assessment.

Interpret the data and information of a cyber risk assessment used in the industry.

Compare companies and businesses that use a cyber risk assessment with those that do not, drawing on findings and research.

Design and compose a cyber risk assessment with an example scenario, evaluating the findings to support and relate back to the objectives.

Appraise the positives and negatives of implementing cyber risk assessment in the computing and technology industry.

1.5 Background Information

Understanding cyber risk assessment requires an understanding of where the concept originates. Cyber risk is essentially the uncertainty that could materialize on a virtual platform and have adverse effects, including the loss of confidential information and the compromise of fundamental digital services (Ghadge et al., 2019). Cyber risk, therefore, stems from the concept of cyber security.

According to Thames and Schaefer (2017), cybersecurity is the protection of digital platforms, encompassing computers, smartphones, computer networks, electronic systems, and data, from malicious attacks. The term covers a variety of contexts in which security is needed, including network security, which involves securing computer networks from intruders, both opportunistic and targeted. It also covers application security, which focuses on protecting the software installed on devices such as computers and smart devices from threats. A threat may target a smart device such as a phone, which could subsequently be used to introduce ransomware on multiple other devices.

Another form of cybersecurity is information security. According to Sarker et al. (2020), information security involves protecting the privacy and integrity of the data that is received, stored, and transmitted in a given information and communication system. Other forms of cybersecurity include operational security, which involves protecting the processes and systems of handling data assets.

Over the years, the number of cyberattacks has increased in both variety and scale. According to Bulao (2022), approximately 30,000 websites are hacked daily around the world. The report also indicates that, globally, 64% of companies have experienced at least one cyberattack. As of March 2021, approximately 20 million breaches had been recorded. The report further states that ransomware cases have grown by 150% over the past two years, and that email is responsible for over 90% of malware attacks. Another report by Tan et al. (2022) indicated that a new attack occurs every 39 seconds and that approximately 24,000 malicious mobile applications are blocked on the internet daily.

Over the years, cybercrime and cyberattacks have been growing, reflecting the fact that ever more people and organisations are adopting information and communication technology. Alternatively, the rising cybercrime rate indicates that most computer and technology systems are becoming more vulnerable by the day. Figure 1 below shows the growth of cybercrime globally, in terms of the companies that have reported one or more cyberattacks each year.

Figure 1: Cybercrime Statistics

Source: https://www.comparitech.com/vpn/cybersecurity-cyber-crime-statistics-facts-trends/#:~:text=Headlinecybercrimestatisticsfor20192021&text=Therewere153millionnew,yearwhichsaw145.8million.

The rising rate of cyber security risk depicted in Figure 1 justifies the need for a more robust security system, which makes it important for organisations to conduct cyber risk assessments. A cyber risk assessment evaluates information and communication systems and virtual technologies for threats and vulnerabilities to security breaches. As already mentioned, cyberattacks have adverse consequences for the victim, whether an individual or a public or private organisation. The assessment of cyber security is a crucial step towards developing a robust cyber security system capable of detecting and mitigating potential threats.

Indeed, there are numerous threats and vulnerabilities to the computing system. The enterprise reporting systems in most organisations today make it necessary to use a distributed network operating system (DOS), which allows multiple remote or terminal computers to work and communicate with each other through a common network channel (Walker et al., 2015). The DOS is also the underlying framework of cloud computing. Such computing and network systems are generally vulnerable to cyberattacks because the breach of a single independent terminal could compromise the entire system. This project therefore aims to explore cyber risk assessment practices and develop a risk assessment strategy that various organisations can use to improve their individual security.

2. Literature Review

2.1 Introduction

This section of the project explores the existing literature regarding cyber risk assessment. The goal of the literature review is to explore the studies done in the past in order to contextualise the current study within the existing body of knowledge. The literature review prevents researchers from simply duplicating studies that would not add significant or meaningful knowledge to what is known. According to Snyder (2019), the purpose of a research study is often either to provide a solution to an existing problem or to provide new knowledge on what already exists. In the context of the current project, the literature review is necessary to establish what has been done regarding cyber risk. This is particularly important since it will enable the study to identify potential gaps that could be incorporated into the planned design based on the evaluation herein.

2.2 Theoretical Framework

As Aparicio et al. (2016) describe, a theoretical framework is an underlying theory on which the study variables or the relationships among them are explained or based. The goal of defining the underlying theory is to explain how one factor influences or affects another and the potential outcome. In the context of the current project, for instance, the theoretical framework helps the researcher explain the specific elements that make a computing system or technology vulnerable, the implications of the threats and vulnerabilities, and how to deal with them. Usually, several theories exist that attempt to explain the relationship among variables, and it is upon the researcher to select the theory most appropriate to the specific study. In this project, the cyber attack theory is used.

According to Zhao et al. (2022), the cyber attack theory holds that the success of a given attack always depends on the information that the attackers have at the time they commit the security breach. Further, the theory holds that the magnitude of a cyber attack is measured by the amount of crucial or sensitive information that the attackers gain or modify in the attack; information is thus a fundamental element of cyberattacks. According to Tubis et al. (2020), the information in the context of a cyber attack could be the configuration of the system’s security, such as a system’s login details, or the important data that is subsequently interfered with following the breach. Configuration information could also include the specific system operation information that attackers could use to stop the system’s operations.

In the context of the current project, the theory implies that the cyber risk assessment should focus on the information vulnerability of a given computer system. Depending on the design of the cyber security system, its vulnerability to breach depends on either the information that enables or facilitates the attack or the information that motivates it. Major e-commerce companies such as Amazon, for instance, hold large volumes of crucial client information, including credit card details, which attackers could target for several reasons. In such a case, the organisation’s motivation to maintain a robust security system is essentially driven by the need to protect clients’ data. Additionally, Amazon serves millions of customers per day, with thousands of online transactions and activities on the platform every second; it is therefore prudent for its computing and network system to stay up and running throughout. The organisation would thus also be motivated to protect the system from breaches that would disrupt normal operations.
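As an illustration of the theory’s central claim, the following minimal Python sketch scores an attack by the sensitive information gained or modified. The record types, sensitivity weights, and counts are illustrative assumptions, not values drawn from Zhao et al. (2022).

```python
# Sketch of the cyber attack theory's magnitude idea: an attack's severity
# is scored by the sensitive information gained or modified.
# Record types, weights, and counts below are illustrative assumptions.

SENSITIVITY_WEIGHTS = {
    "credit_card": 10,       # payment data: highest impact if leaked
    "login_credentials": 8,
    "email_address": 3,
    "system_config": 6,      # configuration info usable to halt operations
}

def attack_magnitude(records_affected: dict) -> int:
    """Score an attack by weighting each record type by its sensitivity."""
    return sum(SENSITIVITY_WEIGHTS.get(kind, 1) * count
               for kind, count in records_affected.items())

# Example: a breach exposing 500 credit cards and 2,000 email addresses
print(attack_magnitude({"credit_card": 500, "email_address": 2000}))  # 11000
```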

2.3 Review of Related Studies

2.3.1 Cyber Risk Assessment

Akinlorabu et al. (2019) explored the risk assessment framework of cloud computing systems, which have become dominant in modern business organisations because of their capacity to enable connectivity and interoperability across devices. Cloud computing, for instance, allows a third-party service provider such as an accountant to access the organisation’s financial information remotely in order to conduct the necessary financial services.

According to Akinlorabu et al. (2019), cloud computing systems are always going to be vulnerable or even have inherent risks. As such, the authors describe risk assessment in information and communication technology systems as the management of the potential uncertainties to an acceptable level. The risk management as described herein also involves protecting the systems or preventing the security system from risks that can be predicted.

A study by Mukhopadhyay et al. (2019) argues that risk management in computing systems takes place in two main stages: risk assessment and risk treatment. The two stages are complementary, with risk assessment preceding treatment; in essence, risk assessment is the foundation of risk management.

According to Tam and Jones (2018), risk assessment, which is the project’s primary focus, involves two crucial activities: risk analysis and risk evaluation. The authors argue that risk assessment is essentially an iterative process that involves identifying the potential risks to a specific system, analysing them, prioritising them, mitigating them, and consequently monitoring the security risks.

Bolbot et al. (2020) concur with the risk assessment process outlined above, specifically emphasising the importance of risk identification and prioritisation. According to the authors, an effective cyber security system depends on accurately predicting the most common risks and providing appropriate and relevant strategies for mitigating them. Risk prioritisation, as the authors explain, involves determining which risks are most likely to occur and how grave they are for the security system; this concurs with the cyber attack theory’s claim that the importance of the various information at stake is a fundamental determinant of the cyber security system.

Prioritising security risks is synonymous with determining which threats and vulnerabilities matter most, which is important in designing the security system. If, for instance, it is established that a given computing network is most vulnerable to external malware and that the most likely entry point is through remote or terminal computers, then the most effective security strategy is to target the remote computers with measures such as strong encryption.

Suppose a network system has high security vulnerabilities from different external sources. In that case, it may be effective to focus the security shield on a specific element of the system, such as the server where the organisation’s data is stored (Yaqoob & Atiquzzaman, 2019). A large organisation with many employees using a distributed operating system, for instance, could be vulnerable to cyberattack, considering that employees vary in their behaviours and perceptions towards security. Some employees may be careless with their passwords, which could ultimately give access to the system, while malicious attackers could use vulnerable employees to gain access voluntarily or by force. In such a case, the organisation must decide which aspect of the system to prioritise for protection.
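The prioritisation described above can be sketched as a simple likelihood-times-impact ranking. The threats, likelihoods, and impact scores in this Python sketch are hypothetical and purely illustrative; no source reviewed here prescribes these values.

```python
# Hedged sketch of risk prioritisation: rank threats by an assumed
# likelihood x impact score so protection targets the highest-ranked first.

threats = [
    # (name, likelihood 0-1, impact 1-10) -- hypothetical values
    ("ransomware via terminal PC", 0.6, 9),
    ("careless password handling", 0.7, 7),
    ("direct attack on server", 0.2, 10),
]

# Compute expected severity and sort in descending order
prioritised = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

for name, likelihood, impact in prioritised:
    print(f"{name}: score {likelihood * impact:.1f}")
# ransomware via terminal PC: score 5.4
# careless password handling: score 4.9
# direct attack on server: score 2.0
```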

2.3.2 Forms of Cyber Risk Assessment

According to Raun (2017), experts, scholars, and practitioners commonly agree that there is no specific or standard procedure for cyber risk assessment. The author attributes the lack of a risk assessment system, especially for cloud computing, to the absence of a structured and standardised cyber risk identification and assessment system and to the highly flexible and dynamic nature of cloud computing.

A study by Hu (2015) argues that the lack of a standard cyber risk assessment framework stems from the complexity of the modern computing system, which is largely based on the internet of things and involves multiple technologies, various devices, and many users or third-party users. According to Hu (2015), unlike the traditional computing security assessment that was often restricted to a specific organisational IT infrastructure, current cyber security assessment involves multiple components. As such, the security vulnerability of a given system may originate from multiple sources, some of which the client or organisation cannot control. Cloud computing, for instance, involves hosting third parties on the virtual platform, and allowing third-party access to centralised data increases the system’s vulnerability. Hu (2015) further argues that the virtual flow of data over physical structures such as fibre optics makes cloud computing and similar technologies more vulnerable to security breaches, with particularly weak protection against breaches that target the internet.

Despite the lack of a standard structure for cyber risk assessment, Tang et al. (2016) argue that specific risk assessment models have been developed since the advent of cloud computing in the 21st century. These models are not standard practice; however, they provide at least a fundamental platform for assessing the potential risks associated with cyber security systems.

2.3.3 Cloud Risk Assessment Model

According to Tang et al. (2016), the cloud risk assessment model is a step-by-step, repeatable process designed to create an adequate understanding of the risks associated with a specific computing system or network and their control. In the context of common cyber attacks, the model’s repeatable steps are designed to create an understanding of the various risks associated with relinquishing information to a third party or granting access to an organisation’s crucial data. Another study by Hentschel et al. (2018) argues that the cyber risk assessment model is a tool specifically designed for the various stakeholders in a cyber security system to assess the risks associated with their roles and positions in the computer or technology system. The model helps stakeholders understand potential problem areas, analyse the various scenarios from which risks could arise, and ultimately design defensible systems to protect against potential security breaches.

According to Hentschel et al. (2018), the cyber risk assessment process is not the duty of one stakeholder. Instead, it involves numerous parties with specific interests, roles, and responsibilities in the security system. The authors argue that from whatever perspective one examines the modern computing system, it is hard for a single organisation to be in charge of its whole information and communication technology system. Today, the interoperability of things through the internet of things has enhanced interaction among business organisations. For instance, most business organisations conduct their procurement online, so their computing systems are connected and interoperable with those of third-party service providers. The cloud computing system, for instance, connects a company’s information system with other third parties, such as linking the company’s accounting system with the banks, to enhance interaction and communication for efficient service provision.

An important emphasis in Tang et al. (2016) and Hentschel et al. (2018) is that the cyber risk assessment framework must be repeatable. In essence, the organisations and stakeholders involved in the risk assessment process must be able to probe the risk assessment procedure and determine issues that may be associated with the system vulnerabilities and make necessary adjustments. The authors also argue that it is imperative to have the system designed such that independent parties, such as consultants, can also examine and verify its suitability for protecting the cyber security system.

Kandasamy et al. (2020) explored some of the frameworks used for cyber risk assessment. The authors argue that the commonly used risk assessment frameworks are based on two fundamental attributes: the nature of the approach and the methodology adopted for each risk assessment approach. The study reports that risk assessment can be done through qualitative or quantitative approaches. The National Institute of Standards and Technology (NIST) framework is the best documented and most widely used. However, even the NIST model does not prescribe a single specific framework, which confirms the lack of a standard approach to cyber security issues. While the NIST guidance was generally designed for disaster and emergency response, it gives special consideration to the internet of things.

Kandasamy et al. (2020) fault the NIST framework for being too general. The authors explore another approach, the Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE), which is qualitative. The major steps in OCTAVE are: establishing the criteria for measuring risk, developing an asset profile, identifying asset containers, identifying areas of concern, examining the risks, and mitigating the risks. The major shortcoming of the OCTAVE approach is that it uses a standardised questionnaire to categorise recovery impact but does not quantify the risk.
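The OCTAVE steps just listed can be pictured as an ordered pipeline applied to each asset. The Python sketch below is only an illustrative scaffold of that ordering, not an implementation of the framework; the run_octave function and the example asset are invented for demonstration.

```python
# Illustrative scaffold of the OCTAVE steps as an ordered pipeline.
# Step names follow Kandasamy et al. (2020); everything else is assumed.

OCTAVE_STEPS = [
    "Establish risk measurement criteria",
    "Develop an asset profile",
    "Identify asset containers",
    "Identify areas of concern",
    "Examine (analyse) risks",
    "Mitigate risks",
]

def run_octave(asset: str) -> None:
    """Walk an asset through the OCTAVE steps in order."""
    for number, step in enumerate(OCTAVE_STEPS, start=1):
        print(f"Step {number} for {asset!r}: {step}")

run_octave("customer database")
```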

Another cyber risk assessment procedure is the Threat Assessment and Remediation Analysis (TARA). According to Kandasamy et al. (2020), the TARA approach focuses on predicting the most crucial vulnerabilities or exposures of the internet of things and provides a procedure for remediation. The authors argue that one major advantage of the TARA approach is that it breaks the risks down into smaller, manageable numbers, which improves the quality and effectiveness of risk evaluation and management. It can also enhance the expected risk management outcome through high precision and targeted measures or remedies. However, a major shortcoming of the procedure is that it neither quantifies the risk management impact nor promotes the defence of other vulnerable areas of the cyber security system.
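The abstract mentions that the two frameworks were compared using a weighted scorecard. A minimal sketch of how such a scorecard works is given below; the criteria, weights, and ratings are illustrative assumptions, not the actual values or results of this study.

```python
# Minimal weighted-scorecard sketch for comparing frameworks.
# Criteria, weights, and 1-5 ratings are illustrative assumptions only.

criteria_weights = {"comprehensiveness": 0.4, "focus": 0.3, "quantification": 0.3}

ratings = {  # 1 (poor) to 5 (strong) per criterion
    "OCTAVE": {"comprehensiveness": 5, "focus": 3, "quantification": 2},
    "TARA":   {"comprehensiveness": 3, "focus": 5, "quantification": 2},
}

for framework, scores in ratings.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{framework}: weighted score {total:.2f}")
# OCTAVE: weighted score 3.50
# TARA: weighted score 3.30
```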

According to Lee (2020), another framework that organisations use to assess cyber risk is ISO compliance. The International Organization for Standardization (ISO) standards include specific considerations for robust cyber network security. Importantly, ISO as a framework for cyber risk assessment prescribes no single standardised approach; every industry and organisation has the liberty to explore its systems’ threats and vulnerabilities and determine the most effective cyber security approach. Compliance with ISO standards, however, helps improve system security. The ISO requirements, for instance, demand regular evaluation of threats and vulnerabilities and recording of the appropriate measures taken. The ISO standards therefore ensure that the system does not become vulnerable to attack through failure to perform regular checks.

2.3.4 Importance of Cyber Risk Assessment

A study by Radanliev et al. (2018) argues that cyber risk poses numerous threats to the organisation, which underscores the importance of assessing it. However, the study notes that few studies have explored the impact of cyber risk assessment because of the uncertain nature of cyber risks and, often, the sensitive nature of organisational cyber breach issues. According to the study, most organisations cannot be trusted to report the exact economic or reputational damage of cyberattacks; most often engage in public relations to maintain customer confidence rather than admit the full extent of the attacks. The study nonetheless indicates that a cyberattack ultimately has adverse direct and indirect effects on organisations. For instance, a cyberattack that halts the functioning of an organisation results in lost transactions for the period the attack lasts.

Another study by Radanliev et al. (2018) found that organisations often incur significant insurance costs against cybercrime because of the potential vulnerability of their network systems and the adverse economic effect an attack can have. The study argues that when an organisation experiences a cyberattack and crucial client data is leaked or used inappropriately, the organisation becomes directly liable for the related consequences, including financial compensation. The reputational damage can also have adverse consequences, especially when private clients’ information is leaked.

3. Method and Implementation

3.1 Introduction

This section of the project outlines the processes and procedures used to gather the information needed to answer the research questions. The methodology section provides the detailed steps followed to determine the type of data to be used, the source of the data, the data collection process, the data analysis and presentation, and any ethical considerations necessary for the process. The methodology section is crucial because it provides the basis for the validity and reliability of a given study (Patten & Newhart, 2017). Any wrong procedure or move in designing the methodology could significantly and adversely affect the outcome of the study, which ultimately affects its validity and reliability.

Validity is particularly crucial in research because it provides the basis for decisions on how to apply the study findings in a decision-making context. As Cohen et al. (2017) observe, the goal of research is always to provide solutions to societal problems or to create knowledge regarding a specific natural or social phenomenon. It is therefore imperative that the validity and reliability of a study be established. In the context of the current research, establishing validity and reliability is important if the outcomes are to be used by a business organisation to design and improve the cyber security of its system. According to Patten and Newhart (2017), a solidly reliable and reproducible research procedure enables other interested parties to use the same procedure to verify or refute the claims or findings of a given study. This chapter details the study philosophy, the specific study design, the sample, the data and data collection procedures, and the data analysis process. The potential ethical considerations are also outlined.

3.2 Research Philosophy

Research philosophy is a set of beliefs or principles used in the construction of knowledge or in the understanding and interpretation of the information or findings of given research (Ryan, 2018). Research philosophy is important in determining the appropriate approach or design for given research. There are three common research philosophies: interpretivism, positivism, and pragmatism. Each is used according to the appropriateness of the research scenario.

In this study, the interpretivism philosophy was used. According to Scauso (2020), the interpretivism philosophy holds that there is no objective or universal approach to knowledge regarding a specific social phenomenon. Its underlying assumption is that humans exist in interdependence with social phenomena and interact with them. Therefore, the interpretivism philosophy holds that the researcher, or an individual seeking information regarding a given phenomenon, is always at liberty to use his or her knowledge, rationale, opinion, or perspective in interpreting a social phenomenon (Ryan, 2018). It also holds that the researcher should take part in the data collection, analysis, and interpretation while actively engaging his or her knowledge and experience.

According to Ryan (2018), the interpretivism philosophy assumes that knowledge is subjective and can be interpreted differently. The opposite of the interpretivism philosophy is positivism. According to Ryan (2018), the positivism philosophy holds that knowledge or information regarding a specific phenomenon is objective and that logic should always be used to determine the truth or understanding of the same. Logic, in this respect, is associated with the use of objective or quantifiable features of a given phenomenon to make inferences and conclusions.

The difference between the interpretivism and positivism research philosophies is best established by examining a relevant example. Society has always portrayed families or marriages as a source of joy; according to Chapman and Guven (2016), it is traditionally held that people who are married are happier than those who are not. Such a notion could be subjected to research for confirmation or dismissal using either the interpretivist or the positivist approach. If an interpretivist approach is used, the researcher would be at liberty to use his or her knowledge, experience, intuition, and even opinion in the research process, which could arguably compromise the outcome. If, for instance, the researcher is married and has children, he could be inclined towards confirming that marriage does indeed create happiness, while an unmarried, divorced, or separated researcher may hold an alternative opinion. This implies significant variation in the potential findings, influenced mainly by the researcher’s personal perception of marriage.

On the other hand, if a positivist approach is to be used in the example presented herein, the researcher would look for objective ways to determine people’s perceptions regarding marriage and happiness without necessarily engaging his or her personal perception. The positivism approach would require an objective or logical measure such as asking the respondents, on a scale of 1-10, whether they believe that marriage makes them happy or not. In such an approach, the outcome of the study is devoid of the researcher’s influence, which, therefore, enhances the study’s validity.

Considering the two approaches, it is evident that the positivism approach is more likely to yield valid and reliable results than interpretivism. As Scauso (2020) observes, researchers are often inclined towards the positivism approach because it enhances the validity and reliability of the study outcomes; in most cases, positivism is the framework on which quantitative empirical studies are based. The interpretivism approach, by contrast, is considered highly vulnerable to bias because of the researcher’s active involvement in the research process (Ryan, 2018). As such, the interpretivism approach is less common and, in most cases, is the foundation of qualitative studies.

Despite its shortcomings compared with positivism, the interpretivism philosophy is commonly used in research scenarios where positivism would not be effective or could not yield reliable and valid results for decision-making. According to Tamminen and Poucher (2020), the interpretivism philosophy is usually used where there is too little prior research or information to develop hypotheses or propositions that can be tested through quantitative approaches. Quantitative studies based on the positivist philosophy depend on the availability of adequate, known variables that can be measured using a specific quantitative approach. The current study does not have such features.

As the literature review revealed in chapter two, there is no commonly agreed-on standard or procedure for conducting a risk assessment of a cyber system. Also, it is apparent that different organisations use different risk assessment strategies that are most suitable to their specific scenarios. It is, therefore, almost impossible to use a positivism approach that seeks to measure the study variables logically. The most appropriate research strategy in this scenario is the use of a specific strategy that allows the researcher the liberty to use various approaches of data collection, analysis, and interpretation to answer the research question, hence the selection of the interpretivism approach.

3.3 Research Approach

The research philosophies mentioned above are the broader foundations for developing a research design. As already mentioned, most qualitative studies are inclined toward the interpretivist approach, although some are based on positivism, while most quantitative studies are based on the positivist research philosophy (Patten & Newhart, 2017). The determination of which approach is appropriate depends on the nature of the study. Since the interpretivist approach is the framework for the current study, the selected research approach is qualitative.

According to Esser and Vliegenthart (2017), a qualitative research approach utilises non-numerical data, usually textual data, to answer the research questions. Within the qualitative approach, there are more specific research designs that detail how a study is designed and conducted.

3.4 Research Design

In this study, a qualitative content analysis research design was used. Content analysis is a research technique that uses data from multiple secondary academic sources to answer the research question (Patten & Newhart, 2017). It is common to confuse the content analysis design with two other secondary-data designs: the systematic review and the meta-analysis.

Systematic review and meta-analysis approaches use a specific laid-down procedure for gathering evaluated secondary sources for inclusion in a study (Gaur & Kumar, 2018). They usually employ a systematic, repeatable procedure that includes scoping, planning, identification, screening, assessment, presentation of findings, and discussion. The two approaches are rigorous and selective in determining the sources from which the data for answering a research question are obtained.

Content analysis is different in that it does not rely on such systematic and rigorous procedures. According to Bengtsson (2016), content analysis gives the researcher adequate liberty to engage and include as many sources in the study as possible. The use of different data sources is important, especially in a research area where information is limited.

The content analysis approach is relevant for the current study because of the ambiguous nature of the information on the selected topic. The literature review revealed that despite the numerous studies that have explored cyber risks, there is no specific strategy for cyber risk assessment. The ultimate goal of the current study is to use secondary information to develop a cyber risk assessment plan that specific organisations can use; the research aims at least to develop a skeleton that can be used to explore the cyber risk for an organisation. It is therefore important to consult a wide range of sources.

Ideally, empirical studies would have been appropriate because their peer-reviewed nature makes them more valid and reliable. However, the review of the literature indicated that few studies focus specifically on cyber risk assessment; the majority concentrate on these systems’ security threats and vulnerabilities. There is, however, a large body of literature published by institutions, organisations, and experts regarding cyber risk assessment, and such data are crucial in developing a cyber assessment plan. Content analysis is therefore appropriate because it allows the researcher to consult a wide literature base in examining the research question. As Kleinheksel et al. (2020) argue, content analysis is valuable because it allows the study to develop perspectives that would otherwise be missed if stricter research procedures such as meta-analyses, systematic reviews, or interviews were used.

According to Kleinheksel et al. (2020), content analysis sometimes gives the researcher additional information that is not published in peer-reviewed journals yet is important in answering the research questions. Expert and editorial reports, for instance, are usually up to date and accurately reflect the current situation in a study area. Such information may not be found in peer-reviewed studies, whose findings often reflect an earlier period, by at least six months or more, because journals go through a long publication process of submission, correction, and resubmission. New information may well emerge between the time a study is completed and its publication in a specific journal.

3.5 Sampling Procedure

According to Berndt (2020), a research sample is the number of participants, research subjects, or sources from which the data are obtained. The concept of sampling originated in human-based research. Depending on the scope of a study, the ideal condition for valid results would be to gather data from all the research subjects or data sources. If, for instance, a study targets disease epidemiology in a given area, the ideal condition would be to examine all the members of the targeted area. In a practical scenario, however, it is almost impossible to get data from all members of the targeted population.

The concept of sampling was therefore developed mathematically to determine the representation of a given population that could be used to answer the research question. The assumption is that a specific number of study subjects or respondents can be obtained from the larger population and that the data derived from the sample reflect what would have been found had the study been performed on the whole population (Berndt, 2020). Sampling mostly applies to empirical primary quantitative studies, but it is also used to enhance validity in qualitative social studies, although the sampling procedures used in different research contexts vary significantly with the nature of the study.

In most qualitative studies, particularly those involving secondary data, there is less emphasis on the sample because it is difficult to determine the relevant sample size (Yates & Leggett, 2016). For instance, in the current research it is almost impossible to determine the number of sources that would be sufficient for drawing a conclusion. As such, qualitative studies usually do not have a specific, mathematical strategy for determining the appropriate sample size.

To avoid the problem of sample size determination in qualitative studies, the data saturation approach is usually used. According to Fusch and Ness (2015), data saturation is the point in the data collection process at which the researcher determines that the collected qualitative (or even quantitative) data is sufficient to make the necessary conclusions or inferences. Data saturation is assumed to be reached when collecting additional data does not produce results significantly different from what has already been established.

In the marriage example used earlier to illustrate research philosophy, data saturation in a qualitative study would be assumed to have been reached once further sources or articles keep indicating the same direction regarding marriage and happiness. If, for instance, the researcher has explored 12 articles and found that most of them associate marriage with happiness, it would be illogical to keep looking for additional articles once the 13th also indicates the same trend as the other 12. At that point, the researcher is at liberty to decide that adequate sources have been collected to answer the research question.

Data saturation is mostly used in qualitative interview research. According to Hancock and Amankwaa (2016), although interview research designs involve human participants, there is no specific procedure or method to determine the sample size. A general rule of thumb is to use fewer participants to avoid incidents of significant data variation. Variation in response can be very common in interview research. Even when different respondents mean the same thing, they are likely to use different words to explain it. As such, the researcher must encode the different responses and place them into the same theme, which increases vulnerability to bias. Therefore, it is always recommended that the sample size in interview design be kept small.

The major issue, however, is how small the sample should be in qualitative interview research; again, there is no specific procedure for determining this. The data saturation approach is therefore very important in determining the appropriate sample size for a qualitative interview design, and the technique has since been transferred to other research designs, such as content analysis (Fusch & Ness, 2015).
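A hedged sketch of the stopping rule described above: collection stops once a new source contributes no theme that earlier sources have not already raised. The sources and theme labels in this Python sketch are invented for illustration.

```python
# Sketch of a data-saturation stopping rule: stop once a new source adds
# no theme beyond what earlier sources already raised. Data is made up.

def saturated(collected_themes: set, new_source_themes: set) -> bool:
    """True if the new source adds nothing beyond what is already coded."""
    return new_source_themes <= collected_themes

collected = set()
sources = [
    {"risk identification", "prioritisation"},
    {"prioritisation", "mitigation"},
    {"risk identification", "mitigation"},  # adds nothing new -> saturation
]

for i, themes in enumerate(sources, start=1):
    if collected and saturated(collected, themes):
        print(f"Saturation reached at source {i}")
        break
    collected |= themes
```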

Data Collection Procedure

Because the study is qualitative in nature, a desktop search approach was the main technique used to locate and collect data for the study. Desktop research involves searching various databases and potential data sites to gather the information needed to answer the research question (Siddaway et al., 2019). It is similar to the procedure used to search for data in a systematic review or meta-analysis, and it involves the use of internet search engines, guided by specific search terms, to locate the necessary resources or articles for the study.

The search term is a fundamental element in secondary research procedures such as the current dissertation. According to Siddaway et al. (2019), the inability to use the correct search terms or combinations can make the research process very frustrating and tedious. The authors therefore argue for the importance of using the right search term combinations to retrieve articles relevant to the research question.

In the current dissertation, the search terms used all corresponded to the topic in question and to the research objectives. The goal was to use search term combinations that returned the right articles to answer the research questions. Based on the study topic, the primary search terms included “cyber risk,” “assessment,” “cyber security,” “vulnerabilities,” “threats,” “breach,” “cloud computing,” “the internet of things,” “cyberattacks,” and “cyber threats,” among many other combinations.

According to Xiao and Watson (2019), it is almost impossible to predetermine with high accuracy the search terms needed to find articles for a given study. The authors argue that predetermined search terms in numerous cases may not yield the results needed to answer the research question. For instance, the combination “cyber risk security issues” may not yield results appropriate for answering the research question, whereas the combination “cyber security threats and vulnerabilities” may yield many appropriate, relevant results.

Therefore, Petrou et al. (2018) argue that the researcher should always be willing to vary the search terms as necessary to generate relevant articles for answering the research questions. There is no specific trick or procedure for deciding how to vary the terms; this is left to the researcher's discretion. However, the researcher must understand that not all searches will yield the right sources or data.

When collecting secondary data for qualitative studies such as content analysis, it is important to have criteria that define the quality of the data. The primary consideration in the current study was that all sources must be published in English. Considering the relative youth of cloud computing and the internet of things, the search was also limited to the period 2005 to 2021. In addition, only academic sources (peer-reviewed journals, books, and expert publications) were used as sources of data; blogs and other non-academic sources were excluded.
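As an illustration only, the following Python sketch shows how these inclusion criteria (English language, 2005-2021, academic source types) might be applied to a list of candidate sources; the records and field names are hypothetical.

```python
# A minimal sketch of applying the stated inclusion criteria to candidate
# sources; the records below are invented for illustration.

candidates = [
    {"title": "Cyber risk in cloud environments", "year": 2019,
     "language": "English", "type": "peer-reviewed journal"},
    {"title": "An overview of IoT security (non-English)", "year": 2020,
     "language": "German", "type": "peer-reviewed journal"},
    {"title": "My thoughts on hacking", "year": 2021,
     "language": "English", "type": "blog"},
]

ACADEMIC_TYPES = {"peer-reviewed journal", "book", "expert publication"}

def meets_criteria(source):
    # English only, within the 2005-2021 window, academic source types only
    return (source["language"] == "English"
            and 2005 <= source["year"] <= 2021
            and source["type"] in ACADEMIC_TYPES)

included = [s for s in candidates if meets_criteria(s)]
print([s["title"] for s in included])  # only the first record survives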

Data Analysis

A thematic analysis was used to analyse the data. According to Clarke et al. (2015), thematic analysis in qualitative research uses multiple qualitative data sources, such as interview transcripts or secondary data sources, to establish specific patterns with respect to the research questions or objectives. The goal of conducting a thematic analysis is to establish specific points of concurrence or deviation in the data provided. For instance, one of the study's objectives is to establish how cyber risk assessment should be done, which implies determining the necessary steps and stages of developing the security assessment system. The thematic analysis technique allows the researcher to examine multiple data sources and determine the specific issues or elements on which there is concurrence. For instance, if most studies indicate that establishing risk measurement parameters should be the first step in the process, that is what is reported in the results.

Braun and Clarke (2021) state that thematic analysis uses specific attributes from the different qualitative data sources, often referred to as codes. Codes are specific words, phrases, or statements with similar meanings even though they are worded differently. For instance, one respondent may use the word “fear,” another the word “scared,” and yet another the word “unwillingness.” Regardless of the word choices, all the respondents seem to have a negative attitude or perception regarding the specific phenomenon under study. Through thematic analysis, the researcher codes these phrases or words to generate a theme, which could be “negative perception towards a phenomenon.” The essence of the technique, as the name suggests, is to reveal the specific themes represented in a set of data.
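The coding step can be illustrated with a small Python sketch; the codebook and responses below are hypothetical, and real thematic coding is an interpretive process rather than a simple keyword match.

```python
# A minimal sketch of mapping varied wording onto shared themes using a
# hypothetical codebook.

codebook = {
    "negative perception": {"fear", "scared", "unwillingness", "distrust"},
    "positive perception": {"excited", "confident", "trust"},
}

responses = [
    "I feel a lot of fear about the new system",
    "Honestly I am scared of data breaches",
    "There is real unwillingness in my team",
]

def code_response(text):
    words = set(text.lower().split())
    # a response is tagged with every theme whose codes it contains
    return [theme for theme, codes in codebook.items() if words & codes]

for r in responses:
    print(code_response(r), "<-", r)  # all three map to "negative perception"
```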

Thematic analysis was originally developed to analyse qualitative interview data (Terry et al., 2017). However, due to its usefulness, it is today used to analyse non-interview data as well, as long as the data are textual. Nevertheless, in non-interview qualitative studies, thematic analysis alone is not always sufficient. Clarke et al. (2015) argue that triangulation should always be added to thematic analysis to gain more insight. Triangulation involves the use of multiple data sources to confirm an emerging theme. If, for instance, one source indicates that defining risk parameters is the first stage of cyber risk assessment, triangulation would involve exploring a different source to determine whether there is concurrence or difference.

Ethical Considerations

This research was based on secondary data, all of which are publicly available, although some require permission to access. Also, no specific organisation's name or attributes were used in the study; hence, there were no major ethical concerns. The researcher nevertheless observed ethical research practice by acknowledging the authors of specific information through citations throughout the text.

Evaluation

Introduction

This section explores some of the cyber risk assessment models that the literature reports to be effective in enhancing the security of computerised and technology systems. It is important to note that the data used herein are all secondary. Based on the evaluation, the study also proposes an improved system that can be used to conduct a more efficient cyber risk assessment while taking into consideration the specific organisational or industry issues that may be important in establishing the cyber security system.

Evaluation Methodology

A qualitative assessment methodology is used to evaluate the existing cyber risk assessment systems. As the literature has already revealed, there is no single standard risk assessment strategy and, consequently, no specific procedure for evaluating cyber risk assessment plans. This dissertation therefore applies a qualitative methodology that examines the strengths and weaknesses of specific cyber risk assessment systems, together with a balanced scorecard, to determine which plans and strategies may be better than others.

A balanced scorecard is useful in evaluating and making system management decisions because of the approach it uses to assign weights and scores to specific operational areas. When designing a security system, certain elements take priority. As Kuner et al. (2017) argue, data protection is the primary reason for developing a robust cyber security system; more weight should therefore be assigned to this area. Schünemann and Baumann (2017) also argue that access control is another major priority area in designing a cyber security system, and it should also be allocated substantial weight, though not as much as data protection. The scoring of two or more systems is then performed, and the total score is used to determine the most suitable system.
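The weighted-scorecard arithmetic can be sketched in a few lines of Python; the weights and raw scores below follow the values reported in Table 1 later in this section.

```python
# A minimal sketch of the weighted-scorecard calculation behind Table 1:
# each metric's raw score (1-10) is multiplied by its weight and summed.

weights = {
    "data protection": 0.40,
    "access control": 0.30,
    "system upgrade and maintenance": 0.20,
    "cyber security organisational culture": 0.10,
}

raw_scores = {
    "OCTAVE": {"data protection": 10, "access control": 9,
               "system upgrade and maintenance": 9,
               "cyber security organisational culture": 9},
    "TARA":   {"data protection": 9, "access control": 10,
               "system upgrade and maintenance": 9,
               "cyber security organisational culture": 7},
}

for model, scores in raw_scores.items():
    total = sum(weights[k] * scores[k] for k in weights)
    print(f"{model}: {total:.1f}")  # OCTAVE: 9.4, TARA: 9.3
```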

Baseline Models

The two models proposed herein were determined through the literature review to be the most commonly used in cyber risk assessment. The review indicates that they are the general approaches or frameworks for assessing cyber risk vulnerabilities. Several other generic cyber risk assessment models are based on either the OCTAVE or the TARA approach, hence the emphasis on examining these two frameworks.

Model One – OCTAVE

Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a qualitative approach. The major steps involved in OCTAVE include: establishing the criteria for measuring risks, developing an asset profile, identifying asset containers, identifying areas of concern, examining risks, and mitigating the risks.
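The staged, repeatable character of the approach can be sketched as a simple loop; the following Python fragment is an illustrative model only, not official OCTAVE tooling, and the assets, threat generator, and mitigation stub are hypothetical.

```python
# A minimal sketch modelling the OCTAVE flow as a loop over assets and the
# threats identified for each, accumulating a risk register.

def octave_cycle(assets, identify_threats, mitigate):
    register = []
    for asset in assets:                        # asset profiles and containers
        for threat in identify_threats(asset):  # areas of concern / threats
            risk = {"asset": asset, "threat": threat}
            risk["mitigation"] = mitigate(risk)
            register.append(risk)
    return register  # in practice, evaluated and re-run periodically

register = octave_cycle(
    assets=["customer database", "payment gateway"],
    identify_threats=lambda a: [f"unauthorised access to {a}"],
    mitigate=lambda r: "apply access controls and monitoring",
)
print(len(register), "risks recorded")
```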

TARA

The TARA approach focuses on predicting the most crucial vulnerabilities or exposures of the internet of things and provides a potential procedure for remediation. One of the major advantages of the TARA approach is that it breaks the risks down into smaller, manageable sets. The approach also improves the quality and effectiveness of risk evaluation and management and, lastly, can enhance the expected risk management outcome through high precision and targeted measures or remedies.

Evaluation Metrics

Data Protection

As already mentioned, the evaluation metrics are determined by the specific weight or importance of a given element of cyber security in the overall protection system. According to Craigen et al. (2014), information or data is today the most fundamental resource in computer network systems. Most organisational processes are now digitised, and they take in large volumes of data, which are processed, stored, or monitored to aid functional business processes (Kaur & Ramkumar, 2021). Regardless of the purpose of a specific cyber security system, data protection is fundamental because the systems function based on various information inputs, which may include a software product's source code or the company's digital records. As such, the overall goal of cyber security, especially in business organisations, is to protect the information stored therein. Data protection capacity is, therefore, the first metric.

Access Control

Entry into a given system is the most common method used to launch a malicious attack on a cyber security system. It is, therefore, imperative to have a cyber security system that is not vulnerable to unauthorised access (Thakur et al., 2015). The second evaluation metric in order of priority is therefore access control risk.

System Upgrade and Maintenance

The system itself is a major source of security breaches, especially if it is not regularly updated. The third priority herein, therefore, is how system updates fit within the two cyber risk assessment models proposed.

Data Security Culture

Employees are an inherent factor in computer system protection because they are the primary users of the technology. Protection mechanisms such as passwords are designed to ensure that employees have adequate control over security vulnerabilities and threats (Cavelty, 2010). Employees of an organisation are therefore expected to uphold security standards. Yet there are numerous cases where employees have been the deliberate perpetrators of cyberattacks, or where their vulnerabilities and carelessness have led to cyber-attacks. It is therefore important to have a culture that encourages employees to take part in computer system protection, and data security culture is accordingly an important consideration herein. The weighted scorecard is used to summarise these attributes.

Results

Table 1: Weighted Scorecard

| Measurement Metric | Weight | OCTAVE Raw Score (1–10) | TARA Raw Score (1–10) | OCTAVE Weighted Score | TARA Weighted Score |
|---|---|---|---|---|---|
| Data Protection | 40% | 10 | 9 | 4.0 | 3.6 |
| Access Control | 30% | 9 | 10 | 2.7 | 3.0 |
| System Upgrade and Maintenance | 20% | 9 | 9 | 1.8 | 1.8 |
| Cyber Security Organizational Culture | 10% | 9 | 7 | 0.9 | 0.7 |
| TOTAL | 100% | | | 9.4 | 9.3 |

Table 1 above is the weighted scorecard showing the rating of the two selected cyber risk assessment frameworks. The results show that the OCTAVE framework is more suitable for creating a robust and secure cyber security system: it has a relatively high rating across the important metrics of cyber security, which indicates that it is an all-round cyber risk assessment model. TARA, on the other hand, has the highest rating with regard to access control and the lowest rating in terms of an organisational culture that encourages cyber security. Therefore, a hybrid model that combines the two frameworks is proposed, as shown in Figure 2 below.

Figure 2: The proposed hybrid cyber risk assessment framework

The hybrid cyber risk assessment framework proposed herein incorporates the TARA framework at the risk identification stage of the OCTAVE model. In this model, the risk is determined through hacking and other infiltration methods to expose the core vulnerabilities of the system. The model is developed based on the literature findings, which indicate that cyber-attacks through hacking and malware introduction are the most common threats and vulnerabilities. The TARA framework has the best capabilities for uncovering such threats.

Discussion

According to the findings, the OCTAVE system is the more effective cyber risk assessment strategy. Several attributes account for its suitability. The OCTAVE framework is preferred because of its robustness in assessing risk, especially cyber risk, which the literature review indicated to be a rather complex issue given how widely the sources of cyber risk are spread, particularly among the parties involved in the shared internet of things.

According to Wee et al. (2016), the key stages of the OCTAVE system include establishing the risk measurement criteria, developing an information asset profile, identifying the information asset containers, identifying areas of concern, identifying threats, identifying and analysing risks, and prioritising, mitigating, and evaluating them. It is indeed important to define the risks to which a computer system is vulnerable. According to Wee et al. (2016), setting the risk measurement criteria is an important step since it helps define what risk means to an organisation. Risks can be measured either qualitatively or quantitatively. Qualitative criteria describe specific conditions that, when met, qualify as a risk. An organisation, for instance, understands that employees who have access to the institution's network can be key to creating security breaches; hence, there are codes of conduct and practice for most workers in cyber security or computer system management. The company therefore needs to define which employee behaviours regarding the computer security system qualify as risks.

According to Gillam and Foster (2020), workers can dupe fellow employees into sharing their logins in order to gain unauthorised access to an organisation's network. Through a qualitative approach, it is imperative for the organisation to define what counts as a security risk. For instance, a worker who logs into his or her computer without minding who could access the information may not qualify as a security risk if the organisation does not define the conditions under which employees should log in. Similarly, suppose the organisation does not prohibit sharing login information among employees, such as a worker in a different department issuing the IT technician with login details to fix a problem on a computer. In that case, such a scenario is classified as a risk only if the practice defies an explicit provision.

Quantitative criteria can also be used to determine when a system is under threat or vulnerable to external attack. The number of malware detections and the number of firewall alerts, among others, could serve as quantitative risk assessment criteria. The role of employees in the computer security system is reason enough to demonstrate the importance of an organisational culture that supports and enables the cyber risk security system that is put in place.

According to Hashim et al. (2018), the OCTAVE risk framework is suitable because it provides an all-round system for determining threats and vulnerabilities, providing mitigation measures, and further evaluating the mitigation measures taken in response to a specific risk. As Bolbot et al. (2020) argue, the risk mitigation or management process never stops, regardless of the system in place. As a result, the authors argue that effective risk management includes continuous monitoring of the risk cycle by redefining the criteria and repeating the process within a specific period of time. Another study, by Aksu et al. (2017), holds that the OCTAVE system's iterative and repeatable aspect is the main reason it is preferred for cyber risk assessment. The authors argue that, through the repeatable feature of the framework, a different assessor can perform the same risk assessment procedure and determine the specific threats and vulnerabilities associated with the system.

The TARA method ranks slightly lower than the OCTAVE framework. The main feature of this framework is that it focuses on protecting access to the computer network security system. The TARA framework accordingly involves examining the existing cyber security system and determining some of the vulnerabilities to external or even internal attacks (Kandasamy et al., 2020). If the assessment establishes that external terminals, such as independent users in different offices within an organisation, are highly exposed, security measures are directed toward protecting those vulnerable areas. According to Wynn et al. (2011), this method is effective and often preferred because it minimises the time and resources involved in the cyber risk assessment. However, the authors also argue that it is a vulnerable risk assessment approach, since it mostly depends on mock trials or white-hat hackers attempting to access the cyber security system. It is easier to direct the cyber risk security efforts towards a specific area through such white-hat hacking.

However, the major challenge with the TARA system is that it mainly focuses on hackers and the introduction of malware or ransomware into the cyber security system. In practice, however, there are several sources of security risk in the internet of things. The lack of focus on organisational security culture, for instance, is a major shortcoming of the framework, since some security breaches originate among employees who are familiar with the organisation's security system.

Conclusions

The internet of things has revolutionised communication in the 21st century. Communication and interaction have changed fundamentally due to the internet and changes in digital technology. Functional processes in organisations have leveraged the new technology, creating efficiency and effectiveness in business operations. Today, there is increased communication and interaction between business organisations and among the departments within organisations. The internet of things has particularly enhanced connectivity and the interoperability of devices. Providing adequate security for the information stored in computer systems, and protecting fundamental digital processes such as electronic health records and barcode scanning in hospitals, have become paramount.

Organisations in different fields and industries adopt various cyber security systems depending on their convenience and need for security. Companies holding crucial data that directly affects their services, for instance, require an adequate and robust cyber security system. However, no single method or strategy can be used to establish the cyber security needs of an organisation because of the aforementioned variation in security needs. There are also limited research studies focusing on cyber risk assessment. This dissertation was therefore designed to explore cyber risk assessment strategies and determine a suitable framework.

The study used a qualitative content research design. The study's findings established that there are two basic frameworks of cyber risk assessment, OCTAVE and TARA, each with specific advantages and disadvantages. A weighted scorecard was used to test the suitability of the two models. The results showed that the OCTAVE approach was more effective for assessing cyber risk because of its robust nature: the OCTAVE model is capable of assessing cyber risk from the beginning, by setting the criteria for risk assessment, through to the evaluation of the methods used to mitigate a specific risk.

Regarding the TARA method, the study found that it is a suitable method of risk assessment when the exercise targets a specific process. It is a more focused approach to identifying potential risk sources and providing the necessary remedies, it is cost-effective, and it has a high risk-protection impact. However, it is too narrowly focused, such that it ignores some important risk vulnerabilities, such as organisational culture, even though employees are often the ones involved in creating and launching malicious attacks.

Recommendations for future work

Future studies should explore the extent to which the diagnostic features of the TARA framework can be incorporated into the OCTAVE system to produce a standardised framework with the capacity to incorporate the core security assessment of computer network systems and other technologies.

Future studies should also focus on specific organisations to explore how they conduct cyber risk assessments. Such studies should examine multiple organisations to determine trends in cyber risk assessment and would help further develop a standard approach to cyber risk assessment for different organisations.

Future studies should also explore the effectiveness of some of the commonly used cyber risk assessment tactics, such as white-hat hacking, and recommend whether their use should continue in developing a cyber risk assessment framework.

Bibliography

Akinrolabu, O., Nurse, J.R., Martin, A. and New, S., 2019. Cyber risk assessment in cloud provider environments: Current models and future needs. Computers & security, 87, p.101600.

Aksu, M.U., Dilek, M.H., Tatlı, E.İ., Bicakci, K., Dirik, H.I., Demirezen, M.U. and Aykır, T., 2017, October. A quantitative CVSS-based cyber security risk assessment methodology for IT systems. In 2017 International Carnahan Conference on Security Technology (ICCST) (pp. 1-8). IEEE.

Asghar, M.R., Hu, Q. and Zeadally, S., 2019. Cybersecurity in industrial control systems: Issues, technologies, and challenges. Computer Networks, 165, p.106946.

Bengtsson, M., 2016. How to plan and perform a qualitative study using content analysis. NursingPlus open, 2, pp.8-14.

Berndt, A.E., 2020. Sampling methods. Journal of Human Lactation, 36(2), pp.224-226.

Bolbot, V., Theotokatos, G., Boulougouris, E. and Vassalos, D., 2020. A novel cyber-risk assessment method for ship systems. Safety Science, 131, p.104908.

Braun, V. and Clarke, V., 2021. Thematic analysis. Analysing qualitative data in psychology. London: Sage Publications Ltd, pp.128-47.

Bulao, J. 2022. How many cyber attacks happen per day in 2022? [Online] Available at: https://techjury.net/blog/how-many-cyber-attacks-per-day/#gref. [Accessed April 21, 2022]

Cavelty, M.D., 2010. Cyber-security. In The routledge handbook of new security studies (pp. 166-174). Routledge.

Chapman, B. and Guven, C., 2016. Revisiting the relationship between marriage and wellbeing: Does marriage quality matter?. Journal of Happiness Studies, 17(2), pp.533-551.

Clarke, V., Braun, V. and Hayfield, N., 2015. Thematic analysis. Qualitative psychology: A practical guide to research methods, 222, p.248.

Cohen, L., Manion, L. and Morrison, K., 2017. Validity and reliability. In Research methods in education (pp. 245-284). Routledge.

Craigen, D., Diakun-Thibault, N. and Purse, R., 2014. Defining cybersecurity. Technology Innovation Management Review, 4(10).

Dalenogare, L.S., Benitez, G.B., Ayala, N.F. and Frank, A.G., 2018. The expected contribution of Industry 4.0 technologies for industrial performance. International Journal of production economics, 204, pp.383-394.

Esser, F. and Vliegenthart, R., 2017. Comparative research methods. The international encyclopedia of communication research methods, pp.1-22.

Fusch, P.I. and Ness, L.R., 2015. Are we there yet? Data saturation in qualitative research. The qualitative report, 20(9), p.1408.

Gaur, A. and Kumar, M., 2018. A systematic approach to conducting review studies: An assessment of content analysis in 25 years of IB research. Journal of World Business, 53(2), pp.280-289.

Ghadge, A., Weiß, M., Caldwell, N.D. and Wilding, R., 2019. Managing cyber risk in supply chains: A review and research agenda. Supply Chain Management: An International Journal.

Ghobakhloo, M., 2020. Industry 4.0, digitization, and opportunities for sustainability. Journal of cleaner production, 252, p.119869.

Gillam, A.R. and Foster, W.T., 2020. Factors affecting risky cybersecurity behaviors by US workers: An exploratory study. Computers in Human Behavior, 108, p.106319.

Hancock, M.E., Amankwaa, L., Revell, M.A. and Mueller, D., 2016. Focus group data saturation: A new approach to data analysis. The qualitative report, 21(11), p.2124.

Hashim, N.A., Abidin, Z.Z., Zakaria, N.A., Ahmad, R. and Puvanasvaran, A.P., 2018. Risk assessment method for insider threats in cyber security: A review. International Journal of Advanced Computer Science and Applications, 9(11).

Kandasamy, K., Srinivas, S., Achuthan, K. and Rangan, V.P., 2020. IoT cyber risk: A holistic analysis of cyber risk assessment frameworks, risk vectors, and risk ranking process. EURASIP Journal on Information Security, 2020(1), pp.1-18.

Kaur, J. and Ramkumar, K.R., 2021. The recent trends in cyber security: A review. Journal of King Saud University-Computer and Information Sciences.

Kleinheksel, A.J., Rockich-Winston, N., Tawfik, H. and Wyatt, T.R., 2020. Demystifying content analysis. American journal of pharmaceutical education, 84(1).

Kuner, C., Svantesson, D.J.B., H Cate, F., Lynskey, O. and Millard, C., 2017. The rise of cybersecurity and its impact on data protection. International Data Privacy Law, 7(2), pp.73-75.

Laukkanen, T., 2017. Mobile banking. International Journal of Bank Marketing.

Patten, M.L. and Newhart, M., 2017. Understanding research methods: An overview of the essentials. Routledge.

Petrou, S., Kwon, J. and Madan, J., 2018. A practical guide to conducting a systematic review and meta-analysis of health state utility values. Pharmacoeconomics, 36(9), pp.1043-1061.

Poritskiy, N., Oliveira, F. and Almeida, F., 2019. The benefits and challenges of general data protection regulation for the information technology sector. Digital Policy, Regulation and Governance.

Ryan, G., 2018. Introduction to positivism, interpretivism and critical theory. Nurse researcher, 25(4), pp.41-49.

Sarker, I.H., Kayes, A.S.M., Badsha, S., Alqahtani, H., Watters, P. and Ng, A., 2020. Cybersecurity data science: an overview from machine learning perspective. Journal of Big data, 7(1), pp.1-29.

Scauso, M.S., 2020. Interpretivism: Definitions, trends, and emerging paths. In Oxford Research Encyclopedia of International Studies.

Schünemann, W.J. and Baumann, M.O. eds., 2017. Privacy, data protection and cybersecurity in Europe. Springer International Publishing.

Siddaway, A.P., Wood, A.M. and Hedges, L.V., 2019. How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual review of psychology, 70, pp.747-770.

Sony, M. and Naik, S., 2019. Key ingredients for evaluating Industry 4.0 readiness for organizations: a literature review. Benchmarking: An International Journal.

Tamminen, K.A. and Poucher, Z.A., 2020. Research philosophies. In The Routledge international encyclopedia of sport and exercise psychology (pp. 535-549). Routledge.

Tan, S., Xie, P., Guerrero, J.M. and Vasquez, J.C., 2022. False Data Injection Cyber-Attacks Detection for Multiple DC Microgrid Clusters. Applied Energy, 310, p.118425.

Terry, G., Hayfield, N., Clarke, V. and Braun, V., 2017. Thematic analysis. The SAGE handbook of qualitative research in psychology, 2, pp.17-37.

Thakur, K., Qiu, M., Gai, K. and Ali, M.L., 2015, November. An investigation on cyber security threats and security models. In 2015 IEEE 2nd international conference on cyber security and cloud computing (pp. 307-311). IEEE.

Thames, L. and Schaefer, D., 2017. Cybersecurity for industry 4.0. Heidelberg: Springer.

Usmonov, B., Evsutin, O., Iskhakov, A., Shelupanov, A., Iskhakova, A. and Meshcheryakov, R., 2017, November. The cybersecurity in development of IoT embedded technologies. In 2017 International Conference on Information Science and Communications Technologies (ICISCT) (pp. 1-4). IEEE.

Walker, B., Popek, G., English, R., Kline, C. and Thiel, G., 2015. The LOCUS distributed operating system. ACM SIGOPS Operating Systems Review, 17(5), pp.49-70.

Wee, C., Bashir, M. and Memon, N., 2016. The cybersecurity competition experience: Perceptions from cybersecurity workers. In Twelfth Symposium on Usable Privacy and Security (SOUPS 2016).

Wynn, J., Whitmore, J., Upton, G., Spriggs, L., McKinnon, D., McInnes, R., Graubart, R. and Clausen, L., 2011. Threat assessment & remediation analysis (TARA): methodology description version 1.0. MITRE Corp, Bedford, MA.

Xiao, Y. and Watson, M., 2019. Guidance on conducting a systematic literature review. Journal of Planning Education and Research, 39(1), pp.93-112.

Yates, J. and Leggett, T., 2016. Qualitative research: An introduction. Radiologic technology, 88(2), pp.225-231.



Clarity on the Duty of Disclosure for Arbitrators and the need for Harsher Penalties for Arbitrators failing to make Material Disclosures based on Supreme Court’s ruling on Halliburton Company v Chubb Bermuda Insurance Ltd [2020]

Name

Institution

Date

Introduction

The much-anticipated case of Halliburton Company v Chubb Bermuda Insurance Ltd [2020] UKSC 48 was decided by the UK Supreme Court on 27 November 2020. The decision clarifies the nature and scope of an arbitrator's obligation to disclose circumstances and appointments that could raise doubts about the arbitrator's impartiality and independence, how this obligation interacts with the duties of privacy and confidentiality, and the situations in which an arbitrator's lack of disclosure could give rise to an appearance of bias. Despite this clarification, however, questions remain as to whether stricter measures should apply to arbitrators in cases of non-disclosure.

Brief Background

The case resulted from the Deepwater Horizon oil spill of 2010 in the Gulf of Mexico. The oil rig in question belonged to Transocean and was leased to BP, which hired Halliburton Company to provide sealing and well-monitoring operations. Transocean and Halliburton were both insured by Chubb Bermuda Insurance Ltd. Halliburton filed a compensation claim after the incident, which Chubb declined to pay. After Halliburton initiated arbitral proceedings and the parties selected their nominees, the absence of agreement between the two on a presiding arbitrator led the High Court to appoint one (Mr. Kenneth Rokison QC). The appointee had been Chubb's proposed candidate for presiding arbitrator, one whom Halliburton had opposed.

Following that, the arbitrator accepted appointments in two further arbitrations: one as Chubb's appointee in a claim by Transocean, and another as a substitute arbitrator in a claim by Transocean against a different insurer. He did not inform Halliburton, a party to the first arbitration, about these appointments. Halliburton learned of the appointments and expressed concern about the arbitrator's omission to report a possible conflict of interest. Despite the arbitrator's explanations, in which he recognised that informing Halliburton of the appointments would have been “prudent” and apologised for not doing so, Halliburton demanded that he resign. Because he had been appointed by the court and Chubb refused to agree to his resignation, the arbitrator stated that he could not step down. Halliburton then filed an application under section 24(1)(a) of the Arbitration Act 1996, which grants the court the authority to remove an arbitrator if circumstances exist that give rise to justifiable doubts as to the arbitrator's impartiality.

Arguments

Clarity on the Duty of Disclosure of Arbitrators

The Supreme Court's decision provides helpful clarity on a number of discrete points of English law. First, it establishes a legal duty of disclosure under English law that applies to arbitrators. Second, it confirms that an arbitrator must disclose the acceptance of more than one appointment concerning interrelated subject matter with only one common party in Bermuda Form arbitrations, unless the parties agree otherwise. Third, it holds that consent to an arbitrator disclosing limited details about one arbitration in the course of disclosure in another may be inferred from the rules of an institution, the arbitration agreement, or the practice of the relevant field.

Furthermore, the court examined an issue that is often not thoroughly investigated in discussions on the matter: the interplay between an arbitrator's privacy and confidentiality responsibilities and their duty of disclosure. The danger of professional secrecy norms, or other practice rules of professional conduct prohibiting disclosure, is only mentioned in passing in the IBA Guidelines on Conflicts of Interest. The court's decision on inferred consent appears pragmatic, and it is feasible that it will be embraced more broadly if the matter arises elsewhere.

In terms of the judgment's practical ramifications, the court's obiter observation that a proposal to take on a second appointment involving a shared party and interrelated issues is likely to necessitate disclosure of a possible conflict of interest seems to be sound counsel. The decision, however, emphasises the contextual and holistic nature of determining whether information must be revealed and whether failing to disclose pertinent information creates an impression of bias. To this effect, any non-disclosure must be evaluated in the context of the situation. The Supreme Court's suggestion, that parties and institutions operating in fields where multiple appointments or appointments arising out of the same subject matter are common should clarify in their rules or arbitration contracts whether disclosure is required, may well be taken up in particular areas, as illustrated by the LMAA and GAFTA. This would help distinguish between arbitrations in which such factors must be disclosed and those in which they need not be.

While the High Court's decision not to remove Mr. Rokison was upheld by the Supreme Court, the court clearly regarded the vagueness of English law concerning arbitrators' disclosure obligations as an essential factor in evaluating the failure to disclose. Now that the issue has been settled, it is conceivable that an English court would not reach a similar conclusion on comparable evidence in the future. This highlights the need for timely disclosure.

The Court of Appeal's decision establishes a duty of disclosure even where the facts to be disclosed would not necessarily lead to an appearance of bias. Section 24 follows the common law criteria for apparent bias: whether a fair-minded and informed observer would conclude that there was a real possibility that the arbitrator is biased. It is important to stress that the test is not whether an informed observer would decide the arbitrator was in fact biased; it is whether there was a real possibility of bias. The Court of Appeal found that the situations in which an arbitrator must make a disclosure are broader, namely where a fair-minded and informed observer would or might conclude that there was a real possibility of bias. The inclusion of “might conclude” alongside “would conclude” widens the range of circumstances requiring disclosure, making the disclosure test referable to a fair-minded observer's possible, rather than probable, bias considerations. Arbitrators may not find this test simple to apply.

Where disclosure is required because a fair-minded observer might suspect bias, the court determined that a lack of disclosure will be a factor, but not a sufficient one, in deciding whether a fair-minded observer would infer a real risk that the arbitrator was prejudiced. A significant consideration will be how the arbitrator reacts to any issues raised by the parties as a result of disclosure or non-disclosure. Thus, facts that by themselves do not justify an inference of apparent bias may, when combined with an arbitrator's later conduct, render the circumstances as a whole sufficient to establish apparent bias. As a matter of English law, the judgment and result may be entirely reasonable and right, but the reasoning is convoluted, to put it mildly. The challenge for arbitrators is exacerbated by the court's apparent agreement with Lord Woolf's words in Taylor v Lawrence [2003] QB 528 concerning judicial disclosure in the arbitral environment.

As a result, arbitrators are left grappling with how to apply their disclosure requirements in practice, especially since the court made plain that their legal obligations may differ from institutional standards and from what is considered acceptable international practice. The court did, however, consider best practice in international commercial arbitration and the prevalence of the multiple appointment problem as important elements in finding that the arbitrator failed to disclose appointments that he should have declared as a matter of law. Was the court's apparently broader approach to the arbitrator's legal duty of disclosure influenced by the previously perceived gap in the Arbitration Act 1996?

Could there have been Harsher Penalties for Arbitrators?

The problem with the Court of Appeal's decision was that the law itself was largely uncontroversial; rather, it was the Court of Appeal's credulous application of those principles that caused doubt and anxiety in the arbitration community. It is probable that English law will take a stricter stance on unconscious bias. According to Lady Arden, Parliament's “wisdom” in enacting the 1996 Act left certain matters to judicial interpretation instead of codifying them in law. As a result, she noted, the law in this field can keep pace with change and take account of evolving expectations and standards, specifically in international commercial arbitration. While the Supreme Court indicated certain areas for judicial improvement, such as what inquiries an arbitrator might make regarding likely conflicts of interest, it is hard to imagine the judiciary making significant progress in addressing the complicated and widespread impact of unconscious bias.

Given the law's inherent limits in preventing the impact of unconscious prejudice, additional, more creative solutions are occasionally proposed in the global arbitration community. The elimination of party-appointed arbitrators in favour of appointments made only by arbitral institutions is an alternative that the Supreme Court recognised as a hotly disputed matter. Advocates of this view say that each party tries to pick an arbitrator likely to assist them in winning the case, and the parties' expectations may unintentionally influence an arbitrator's decision-making. Others have taken a more strident stance, arguing that party-appointed arbitrators are unprincipled, a moral hazard, and an ill-conceived practice that should be eliminated or strictly regulated. The Court of Appeal's assumption that the omission was unintentional rather than deliberate was itself dubious. A deliberate omission would be an obvious indication of bias; an unintentional one, however, can be just as harmful. Unconscious prejudice is almost always unintended, yet it can still be a source of concern for a party.

The question of why the non-disclosure ultimately did not reach the threshold for apparent bias was one of several aspects of the court's decision criticised as confused and contradictory. The court held that an additional factor, “something more”, was required, but did not specify what. The court correctly said that information and circumstances known to the arbitrator that could result in justified suspicions about his impartiality should be disclosed. However, the court's rationale was particularly unhelpful. The court emphasised that, by failing to make a disclosure that should have been made, the arbitrator would not have exhibited the “badge of impartiality”. Mr. Rokison should have informed Halliburton at the time of his subsequent appointments; this was a matter of both good practice and law in international commercial arbitration. Yet the court determined that a well-informed and fair-minded observer would not infer that he was biased, since the omission was unintentional rather than deliberate and there was only a slight overlap between the proceedings. As a result, Halliburton's challenge was without merit.

The Supreme Court's distinction, in terms of disclosure, between general or default arbitration and specialist arbitration is not without problems. First, the Supreme Court does not list the specialist forms of arbitration that should be “protected” from the default standards mandating disclosure, so the borders between the two sorts of arbitration are unclear. Second, in the sense that every type of arbitration (oil, construction, telecommunications, gas, and others) is a specialist one, such broad or default arbitration does not really exist. The summa divisio between specialist and generalist arbitration is therefore largely ineffective. This limitation is evident in the present case, where Bermuda Form arbitration was considered a specialist form of arbitration, yet the parties disagreed about the disclosure expected in such arbitration. The Supreme Court ultimately found a duty to disclose, given that “it has not been shown that there is an established custom or practice in Bermuda Form arbitrations by which parties have accepted that an arbitrator may take on such multiple appointments without disclosure.”

Another point of contention is whether it is reasonable to exempt certain types of arbitration from the requirement to disclose. That the criteria for bias and disclosure may differ depending on the form of arbitration is uncontroversial. However, accepting a blanket disclosure exemption in all circumstances may lead to scenarios in which a party-appointed arbitrator functions as the party's representative because of the pecuniary interests flowing from several appointments. This would stand in stark contrast to English case law's emphatic rejection of such arrangements. To put it another way, the Supreme Court's context-specific approach to disclosure in specialist arbitration does not make its stance on impartiality more effective. Even if lower impartiality requirements for party-appointed arbitrators were adopted, it would still be appropriate to impose a duty of transparency on them to ensure that they can function impartially.

Conclusion

The Supreme Court's decision has clarified the English law stance on arbitrator bias, creating a legal duty of disclosure in many situations. The appointed arbitrator was found not to have behaved unjustly or partially in this case. Nevertheless, according to the court, the lack of clarity in English law regarding the duty of disclosure was a significant factor in its decision, implying that other arbitrators who do not disclose multiple appointments involving the same party may in future be found in violation of their obligations of impartiality and fairness and thus removed under the 1996 Act. The decision therefore strikes an effective balance between party autonomy, as embodied in the 1996 Act, and a firm approach to the impartiality and fairness of the arbitrator.

However, while the court agreed that unconscious prejudice could be a factor in determining impartiality objectively, it also noted the difficulties of establishing and defending its influence. In keeping with prior rulings, the court advised against conducting a thorough investigation into whether or not an arbitrator was actually prejudiced, in favour of focusing on the legal criteria. In addition, numerous questions about the duty to disclose will remain after Halliburton. While it is now evident that disclosure is a legal obligation, the Supreme Court's decision does not help determine exactly what situations should trigger disclosure. Arbitrators can be expected to continue relying on the IBA Guidelines on Conflicts of Interest, which provide specific guidance on the circumstances that must be declared. Uncertainty will also persist because the decision places a premium on custom and context when it comes to bias disclosure and assessment. The case also reveals the need to rethink the law of arbitrators' impartiality. A good starting point for reform and clarification is to keep in mind that the standards and rules developed locally and internationally were crafted in contemplation of party-appointed arbitrators who, whatever the law or the Supreme Court proclaims, do not generally act as impartially as a judge or a chair of the arbitral tribunal would. As a result, the impartiality of arbitral tribunal chairs (such as Mr. Rokison) should be scrutinised more closely. This would benefit the parties and the arbitral justice system, which is effective because of the parties' and national jurisdictions' faith in the fairness of this private justice system. These issues suggest that the Supreme Court should perhaps have imposed harsher penalties on arbitrators who fail to make material disclosures affecting the fairness of proceedings.

Bibliography

Cases

Halliburton Company v Chubb Bermuda Insurance Ltd [2020] UKSC 48

Taylor v Lawrence [2003] QB 528

Statutes and Statutory Instruments

Arbitration Act 1996

Secondary Sources

Cartoni. B. ‘Arbitrator Bias: Have Halliburton and Sun Yang taught us what’s at stake for ADR?’ (2021) SCLA.

El Chazli, K. ‘The UK Supreme Court on Arbitrator’s Apparent Bias and Disclosure: Some Clarifications and Missed Opportunities: Halliburton Company v Chubb Bermuda Insurance Ltd [2020] UKSC 48,’ (2021), Civil Justice Quarterly, 2.

Helleringer, G. and Ayton, P., 'Bias, Vested Interests and Self-Deception in Judgment and Decision-Making: Challenges to Arbitrator Impartiality', in T. Cole, The Roles of Psychology in International Arbitration (2017) International Arbitration Law Library, 40, 37-38.


Business Research Methods

Name

Course

Professor

University

City and State

Date

PART ONE

Aim and Research Questions

The study was conducted to investigate the adoption of self-checkout technologies by consumers in Singapore. This was a point of interest for the researchers, given that Asian countries had been slower in adopting self-checkout counters than other countries. Additionally, the study aimed to establish the relationship between the use of self-checkout counters and various demographics, the evolution of these technologies, and the factors that enhance their adoption. The study was based on three research questions:

Are there specific demographic or psychographic segments more likely to adopt self-service technologies?

Can users’ evaluation of self-service technologies be used to predict adoption behaviour?

What are the situational factors that encourage or limit the use of self-service technologies?

Survey Instrument

The survey instrument for this study was organized into three sections, each targeting certain information from the respondents. The instrument itself was in the form of a questionnaire, and the sections were as follows:

First Section: This part aimed at establishing whether a respondent prefers self-checkout counters by requiring them to rate how frequently they use these technologies. This part also collected demographic information from respondents.

Second Section: A five-point Likert scale measured respondents' perception of self-service counters in terms of their advantage over staffed counters, ease of use, reliability, and entertainment value.

Third Section: Seven five-point Likert scale statements were used to investigate the likelihood of respondents using self-checkout counters.

The second section of the survey instrument measured customers' perception of the self-checkout counter using five-point Likert scale questions. The respondents' perception of self-checkout counters was divided into five dimensions: Relative Advantage over staffed counters, Perceived Complexity, Reliability, and Fun, each measured using a three-item scale, plus the compatibility of self-checkout counters with respondents' lifestyles, measured using a one-item scale.

Cronbach alpha coefficients represent the internal consistency of a set of items; they are used to determine how closely related the items are as a group. In research studies, Cronbach's alpha is primarily used when data are collected through multiple Likert questions in a questionnaire, and it provides information on the reliability of the scale (Tavakol and Dennick, 2011). In this paper, Cronbach's alpha coefficients were used to determine the internal consistency of the Likert scale questions measuring the Relative Advantage, Perceived Complexity, Reliability, and Fun of self-checkout counters with respect to respondents' perceptions.
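For illustration, a minimal Python sketch of the Cronbach's alpha computation is shown below; the response matrix is hypothetical, not the study's data.

```python
# A minimal sketch of Cronbach's alpha for a three-item Likert scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]  # hypothetical
print(round(cronbach_alpha(responses), 3))
```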

Data Collection

The convenience sampling method was used to obtain data from shoppers and residents within the residential suburb. This sampling method involves selecting the individuals who are most accessible to the researcher. Since the research topic centres on self-checkout counters, the researcher had easy access to shoppers in the identified research location. It is worth pointing out, however, that this type of sampling may not accurately represent the entire population. Nevertheless, the responses were independent, since each respondent was issued with a separate questionnaire and respondents were selected individually (Leng and Wee, 2017).

Findings

The researchers conducted chi-square tests of independence to determine whether there was any relationship between demographic variables such as age, gender, and education and the use of self-checkout counters. The results revealed that none of these demographic variables had a significant connection with the use of self-checkout counters: each test had a p-value that was greater than the alpha level of significance.

The 4 in this report indicates the degrees of freedom for the test, that is, the maximum number of independent values in the statistical test that are free to vary. For a chi-square test of independence, the degrees of freedom are calculated using the formula df = (r - 1)(c - 1), where r is the number of rows and c the number of columns in the contingency table.

Effect size represents the magnitude of the experimental effect obtained after performing a statistical test: the greater the effect size, the stronger the association between two or more variables. Statistical significance, however, is affected by sample size, so the large sample in the research could produce statistically significant results even where the underlying effects were only moderate (Fritz, Morris, and Richler, 2012).
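A chi-square test of this kind can be sketched as follows; the contingency table is hypothetical (a 3 x 3 table gives the df of 4 discussed above), and Cramér's V is included as one common effect-size measure.

```python
# A minimal sketch of a chi-square test of independence on a hypothetical
# 3x3 contingency table (e.g. age group vs. usage frequency); df = (3-1)(3-1) = 4.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 15, 10],   # rows: age groups, columns: usage levels
                  [18, 22, 14],
                  [12, 17, 21]])

chi2, p, df, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # effect size
print(f"chi2={chi2:.2f}, df={df}, p={p:.3f}, Cramer's V={cramers_v:.2f}")
```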

Conclusion and Further Research

A study conducted by Patsiotis, Hughes, and Webber (2013) suggests that there may be different degrees of resistance or reluctance to adopting technology, which is a potential reason why some people do not prefer using self-checkout technologies. Therefore, future research could investigate the differences among non-users of self-checkout counters and establish why they do not choose this technology.

PART TWO

Case Study 1: Amazon Fresh

Frequency table

A total of 87 participants took part in the survey investigating the level of take-up for till-less shopping. Out of the total, 32 indicated that they would not consider shopping at Amazon Fresh (36.8%), while the remaining 55 indicated that they would consider shopping at Amazon Fresh (63.2%).

Research Question One

Null Hypothesis: The proportion of people who consider shopping at Amazon Fresh is less than or equal to 50%.

Alternative Hypothesis: The proportion of people who consider shopping at Amazon Fresh is greater than 50%.

The test is conducted at the 0.05 level of significance. The results reveal that the proportion of those who consider shopping at Amazon Fresh is 0.632 (n=55).

P-value

The test yields a p-value of 0.018.

Decision

The p-value, 0.018, is less than the 0.05 alpha level of significance. Therefore, the null hypothesis is rejected.

Conclusion

There is sufficient evidence to support the claim that the proportion of people who would consider shopping at Amazon Fresh is greater than 50%.
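One way such a one-proportion test could be run is sketched below with an exact binomial test; note that the reported p-value of 0.018 may correspond to a different test variant (for example, a two-sided test), so the sketch is illustrative rather than a reproduction of the original analysis.

```python
# A minimal sketch of testing H0: p <= 0.5 against H1: p > 0.5 with an
# exact binomial test on the reported counts (55 of 87 would consider).
from scipy.stats import binomtest

result = binomtest(k=55, n=87, p=0.5, alternative="greater")
print(f"sample proportion = {55/87:.3f}, p-value = {result.pvalue:.3f}")
```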

Research Question Two

Null Hypothesis: The proportion of people who consider shopping at Amazon Fresh is not different between those comfortable with mobile technology and those who are not.

Alternative Hypothesis: The proportion of people who consider shopping at Amazon Fresh is different between those who are comfortable with mobile technology and those who are not.

P-value

Decision

Fail to reject the null hypothesis: the obtained p-value is greater than the alpha level of significance.

Conclusion

There is no statistically significant difference in the proportion of people who would consider shopping at Amazon Fresh between those who are comfortable with mobile technology and those who are not.

General question

The main reason for conducting hypothesis testing is to determine whether there is sufficient statistical evidence to confirm or refute a belief or claim regarding a particular parameter. In Research Question Two above, the hypothesis was that the proportion of people who would consider shopping at Amazon Fresh differs between those who are comfortable with mobile technology and those who are not. In this case, a visual comparison of the sample statistics would not be enough to accept or reject the hypothesis; the hypothesis test is conducted to determine whether the difference between the two proportions is statistically significant.
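A two-proportion z-test of the kind described could be sketched as follows; the group counts are hypothetical, since the case study does not report the breakdown by comfort with mobile technology.

```python
# A minimal sketch of a two-proportion z-test comparing the share of
# would-be Amazon Fresh shoppers across two (hypothetical) groups.
from statsmodels.stats.proportion import proportions_ztest

successes = [38, 17]  # would consider: comfortable / not comfortable (invented)
totals = [55, 32]     # hypothetical group sizes

stat, pvalue = proportions_ztest(count=successes, nobs=totals)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")  # compare p against alpha = 0.05
```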

Case Study Two: Entrepreneurship Intention

Descriptive Statistics

A total of 134 students took part in the survey to determine whether the average entrepreneurship intention score differed between those who studied an entrepreneurship module and those who did not. The 60 students who did not study the module had an average score of 20.917 (SD = 2.625), while the 74 who studied the module had an average score of 22.230 (SD = 3.041). The median scores for the two groups were 21.00 and 22.00, respectively.

Hypothesis testing one

Null Hypothesis: There is no significant difference in the average scores for entrepreneurship intention between the students who studied for an entrepreneurship module and those who did not.

Alternative Hypothesis: There is a significant difference in the average scores for entrepreneurship intention between the students who studied for an entrepreneurship module and those who did not.

P-value

The test yields a p-value of 0.009.

Decision

The p-value, 0.009, is less than the alpha level of significance; hence, we reject the null hypothesis.

Conclusion

There is a statistically significant difference in the average entrepreneurship intention scores between the students who studied the entrepreneurship module and those who did not.
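An independent-samples t-test of this kind might be run as below; since the raw scores are not available, the two groups are simulated to match the reported means and standard deviations, so the printed p-value will only approximate the reported 0.009.

```python
# A minimal sketch of an independent-samples t-test on simulated data whose
# group means/SDs follow the descriptive statistics reported above.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
no_module = rng.normal(20.917, 2.625, 60)  # placeholder for unavailable raw scores
module = rng.normal(22.230, 3.041, 74)

t, p = ttest_ind(module, no_module)
print(f"t = {t:.2f}, p = {p:.4f}")  # reject H0 when p < 0.05
```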

Hypothesis Testing Two

Null Hypothesis: The University department that one studies at does not affect the average score for entrepreneurship intention.

Alternative Hypothesis: The University department that one studied affects the average score for entrepreneurship intention.

P-value

The test yields a p-value of 0.001.

Decision

The p-value of 0.001 is less than the alpha level of significance, 0.05. Therefore, we reject the null hypothesis.

Conclusion

We reject the null hypothesis, meaning there is sufficient evidence to support the claim that the university department affects the average score of entrepreneurship intention. Therefore, average entrepreneurship intention scores differ significantly depending on the university department at which one studied.
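A one-way ANOVA such as the one implied here could be sketched as follows; the department groups are simulated placeholders, as the raw data are not available.

```python
# A minimal sketch of a one-way ANOVA comparing entrepreneurship intention
# scores across (hypothetical) university departments.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
business = rng.normal(23.0, 3.0, 45)     # simulated placeholder groups
engineering = rng.normal(21.5, 2.8, 50)
arts = rng.normal(20.5, 2.9, 39)

f_stat, p = f_oneway(business, engineering, arts)
print(f"F = {f_stat:.2f}, p = {p:.4f}")  # p < 0.05 -> department matters
```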

General Question

The p-value in hypothesis testing is used when deciding whether or not to reject the null hypothesis. The value represents the probability of observing results at least as extreme as those obtained if the null hypothesis were true. If the p-value is less than the alpha level of significance, we reject the null hypothesis; if it is greater, we fail to reject it. The main limitation of the p-value in hypothesis testing is that it measures neither the probability that the hypothesis is true nor the probability that the data were produced by random chance alone (Greenland et al., 2016).

References

Fritz, C.O., Morris, P.E. and Richler, J.J., 2012. Effect size estimates: current use, calculations, and interpretation. Journal of experimental psychology: General, 141(1), p.2.

Greenland, S., Senn, S.J., Rothman, K.J., Carlin, J.B., Poole, C., Goodman, S.N. and Altman, D.G., 2016. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. European journal of epidemiology, 31(4), pp.337-350.

Leng, H.K. and Wee, K.N.L., 2017. An examination of users and non-users of self-checkout counters. The International Review of Retail, Distribution and Consumer Research, 27(1), pp.94-108.

Patsiotis, A.G., Hughes, T. and Webber, D.J., 2013. An examination of consumers’ resistance to computer-based technologies. Journal of Services Marketing.

Tavakol, M. and Dennick, R., 2011. Making sense of Cronbach’s alpha. International journal of medical education, 2, p.53.