
Episode 1 - A Brief Introduction to Theory and Principles Transcript



[Image: The CyberEthics Podcast logo, with "The CyberEthics Podcast" above it and "Episode 1 - Introduction" below it.]

Hello and welcome to the CyberEthics Podcast. I am Dr. Michael Bruder.  


I have been teaching Cyberethics for the past decade and am passionate about discussing these issues with experts and bringing those conversations to the wider public.


With the help of some professionals from various fields, we will be discussing the ways our increasingly digital lives raise old and new ethical concerns.  We will cover such topics as the dangers of Artificial Intelligence, privacy and surveillance, gaming, social media addiction, digital accessibility, cyber-education and many more.


This first podcast is intended to serve as a brief introduction to the moral frameworks and ethical principles that I will be making reference to as we discuss issues of Cyberethics.



First, a note on terminology: what I am calling cyberethics is sometimes called computer ethics, information ethics, or even technology ethics.  These all refer to the same general scope of issues that arise when our technology brings us face to face with ethical concerns.  I will also be alternating between the terms ethics and morals; these can be considered interchangeable for our purposes.


There are few areas of human endeavour that are growing as quickly as technology.  Each new development entails a new set of possibilities and these possibilities, in turn, present new challenges to our understanding of their moral implications.  What tends not to change quite so rapidly are our core values and the moral frameworks through which we assess the implications of our technological developments.  




In Philosophy, ethics is divided into normative and applied approaches.  Normative ethics studies broad questions such as: 


“What is the good for human beings?”  


“Is there some quality that would make an action good in every context?” 


and “What ought we to be doing?”  


In applied ethics, we are interested in how a theory guides our decisions within a specific field of action.  Cyberethics is the study of the application of these moral theories to issues related to technology and our online activities. In Cyberethics Philosophy courses, students discuss the moral issues surrounding these activities, usually through the consideration of particular examples or case studies.


The cases provide instances of some of the new ethical difficulties that arise with new digital possibilities.  For this reason, it is important to have a firm understanding of the moral perspectives that can be taken when evaluating such cases.  Moral frameworks provide us with a reference point from which we can evaluate the challenges that arise in this ever-changing environment.  Since students interested in cyberethics may not have a background in moral philosophy, it is prudent to provide an introduction to these theories in order to enrich the study of these issues and to offer direction for further research and investigation.  The purpose here is to provide a short introduction to moral frameworks, including a brief history of the origins and development of each theory and a consideration of each theory’s strengths and weaknesses.  In this podcast, common topics and cases within cyberethics will be discussed in light of these moral theories, as well as some moral principles.  


One way to think about the different moral frameworks is by breaking down a moral action into its constituent parts.  Roughly speaking, we can think of any moral action as being composed of: 

  • the person performing the action, 

  • the motivation for the action, 

  • and the results of the action.  

Accordingly, there are three major moral theories that focus on these three aspects of moral action: virtue ethics, which focuses on the person performing the action; deontology, which focuses on the rule or motivation for the action; and consequentialism, which focuses on the results of the action.  In addition to these three major moral theories, I will say something about moral relativism which is more of a metatheory (a theory about theories) and argues against the possibility of an objective account of what is right or wrong. But first, I will say a little bit about each of the three major moral theories current in normative philosophy:



First: Virtue Ethics.


For those who do not have a background in moral theory, the word “virtue” may have connotations of a prudish or Victorian cultural sensibility.  This is not the sense it has in ethical theory.  Its origin as a term in philosophy goes back to Aristotle (384-322 BCE) in Ancient Greece.  For Aristotle, a virtue is a way to excel as a human being.  Aristotle orients his moral philosophy around what is good for human beings, and he determines that what is good for human beings is a certain kind of activity guided by reason (Nicomachean Ethics, 1098a5-8).  Our activities express virtues, which are various ways of excelling or flourishing.  There are, generally speaking, two kinds of virtue for Aristotle: virtues of character and virtues of thought.  Virtues of character refer to things like bravery and generosity.  These are characteristics of actions, but also, and more importantly for Aristotle, they are characteristics of people.  Virtues of thought allow us to figure out how to accomplish our goal of flourishing through virtues of character.


One of the distinctive aspects of virtue ethics is its concern with the internal state of the person performing the action: the moral agent.  On this model, it is not enough to do the right thing, to be brave in the face of danger, for instance.  What is required is the development of an internal state, a disposition, to be brave.  It is not enough to do what a brave person does; one must strive to be a brave person, and this means having the internal state of someone who behaves bravely when it is called for.  


Virtue ethics is a helpful moral framework for providing a reference point against which to check our goals, but it is sometimes criticized as being less helpful in providing specific instructions for action.  Virtue ethics gives us the guiding principles and an understanding of how to build character but leaves room for our practical reasoning (or the reasoning of those who are wise in the matter at hand) to determine what is appropriate in a given context.  One way of summarizing this position is to say that we are trained to recognise the virtues we should exhibit, and then we use our practical wisdom to determine how to exhibit each virtue in a given context.


One common topic in Cyberethics is the types of relationships that we form through online communication.  A common question that is asked is “Can we develop and maintain real friendships through online interactions?”  Virtue Ethics provides a framework for approaching this question by referring us to our human ability to develop via certain kinds of friendships.  To approach this question through virtue ethics, we would evaluate how we flourish through in-person friendships and then analyze how online friendships affect that kind of flourishing.  To give one possible analysis, we might think that in-person friendships help us flourish because they reinforce certain values we approve of and also expand our experiential scope by introducing us to new things.  If this is a complete account of what a real friendship is, then an online friendship would be considered real if and only if it also fulfills these roles. 


Privacy is another issue in cyberethics for which virtue ethics may be particularly salient.  The question of how much privacy we should expect, or are entitled to, in the online world is a function of why that privacy is valuable.  Some people maintain that if we only value privacy to conceal wrongdoing, then we may not be entitled to it at all.  However, a virtue ethics analysis may assert that privacy allows us to develop intimate relationships, which clearly allow us to flourish as human beings.  With this understanding, one could argue that privacy is a necessary human good: necessary for human flourishing on a virtue ethics account.



The second theory I’d like to discuss is Deontology.  Deontology provides a nice contrast with virtue ethics since, while virtue ethics focuses on the development and internal state of the moral agent, deontology is rather concerned with the motives and rules that govern actions.  It is sometimes claimed that, while virtue ethics is sensitive to context but cannot provide detailed direction, deontology provides very clear direction on what to do but is criticized as not being flexible enough to allow for contextual exceptions (Annas, 2015).  Contemporary deontological theories trace their origins back to the German philosopher Immanuel Kant (1724-1804).  Kantian morality focuses on whether an action is right or wrong; whether you have a responsibility to do, or refrain from, some action.  


The goal of a deontological theory, generally speaking, is to provide rules for acting that will be universally applicable.  Kant believes that our ability to be moral depends on our ability to be rational.  It follows from this, Kant argues, that our moral decisions will also be rational.  Kant maintains that, since reason is universal, the guiding principles of morality will also be universal.  Because of this, Kant concludes that a moral action would be one that each person could want everyone to do.  In other words, each rational person thinks it would be rational to do that action in every instance.  This is a paraphrase of what is called Kant’s categorical imperative, meaning that it is a command that applies everywhere.  Put loosely, if you can rationally wish everyone would behave that way, then that is the moral way to behave.  This universality can be seen as both a strength and a weakness of the theory. 


One of the often-debated questions in cyberethics is whether digital piracy (the taking or distributing of intellectual property [IP], like a movie or song, without paying for it) is always morally wrong.  One might argue that there are circumstances where digital piracy is acceptable, for instance, if piracy of the material does not have any noticeable effect on the IP rights holder.  On the deontological model, however, if piracy is stealing and stealing is wrong, then piracy is always wrong, regardless of the consequences or lack thereof.  This issue is further complicated in circumstances where one pays for a video streaming service but digitally masks one’s location in order to access content only available in other countries.  Some have claimed that the practice of Canadian Netflix subscribers masking themselves, via VPNs, as Americans to access additional content constitutes stealing and is morally wrong.  A deontologist may have reasons to agree, since this is a violation of the terms of service agreed to by the subscriber (breaking a promise).  However, an analysis that focuses on the effects of such a practice might argue that there is little to no harm caused, since these are paying subscribers, and the happiness of these subscribers as customers is significantly increased.  Such an analysis would be in line with the consequentialist approach.


As you might expect, consequentialism focuses on the effects, or consequences, of an action.  The most popular form of consequentialism is utilitarianism, and John Stuart Mill (1806-1873) is its most famous proponent.  Mill claimed that, in attempting to evaluate the morality of an action, one should consider its consequences.  Specifically, one should consider whether the consequences provide for the greatest happiness of those affected by the action.  Happiness here means pleasure: not just the simple pleasures of the physical appetites, but also more meaningful pleasures such as those we get from learning and accomplishment.  Utilitarianism is helpful when there are competing options and we need to know how to benefit the greatest number of people involved.  It is less helpful when it is unclear how to compare competing values, or when the objective is not to maximize happiness but rather to address an imbalance or redress a wrong.  While virtue ethics is concerned with how an action expresses or develops the character of the agent, and deontology is concerned with the absolute rational morality of an action, consequentialism is primarily concerned with the effect of an action.  This means that an action that is morally correct in one context may not be so in a different context, depending on how the relevant parties are affected.  


The issue of using facial recognition technology in the surveillance of public spaces raises issues of consent and privacy but is often justified in terms of its consequences of increasing public safety.  A utilitarian analysis would have to assess the overall effect on a population’s happiness, weighing loss of privacy and consent against a reduction in violent crime and theft.  To take another example, a deontological analysis may unequivocally condemn the stealing and sharing of classified information obtained from a government but a consequentialist approach would be open to assessing the effects of this information on the public.  Perhaps possessing this information allows citizens to work against perceived wrongs committed by that government.

 

While not as substantive a moral theory as the previous three, it is worthwhile to become familiar with the concept of moral relativism.  Relativism traces its roots back as far as the Ancient Greek philosopher Protagoras.  He is famously cited as claiming that ‘man is the measure of all things’, and this has been interpreted by some as meaning that the truth of things is simply as they appear to me.  If the wind feels warm to me, then the wind is warm, regardless of what anyone else claims.  This idea of the unassailable truth of my sensations is expanded in moral relativism to argue for the validity of individual or disparate accounts of what is morally correct.  Modern forms of relativism tend to focus more on the widely varying moral claims found in different theories and across different cultures, and conclude from this that there is no universally valid moral framework.


The two most common forms of relativism focus either on the lack of agreement between and amongst cultures or on the lack of a universal criterion by which we can evaluate moral positions.  In the former case, the fact that we don’t all agree is taken as evidence that there cannot or should not be unanimous agreement on moral questions.  In the latter case, it is argued that no universally true judgement is possible on moral questions and that what may be morally right for one culture at one time may be morally wrong for another culture or at another time.  It is generally agreed that, as a normative ethical claim, relativism is self-defeating: it is a universal claim about ethics that concludes there are no universally valid claims about ethics.  This position is articulated by authors such as Allen Wood.  A response to the claim that there is a lack of agreement on moral concepts is that there is, in fact, commonality among cultures in terms of the broad moral commitments we have.  The way these claims or principles are enacted may look different in different contexts and cultures, but they reveal a common underlying moral commitment.  Julia Annas, in her book Intelligent Virtue, makes such a claim.  Using the example of bravery, she points out that brave actions can appear different, or even opposed, in varying contexts.  She gives the example of how a soldier may behave bravely in the conduct of war, but someone could also demonstrate bravery by protesting participation in an unjust war.  Both actions can instantiate bravery even though they may appear to be opposite actions: participating in a war and protesting participation in a war. 


Moral relativism is problematic in a practical sense because, if one is convinced that the moral rightness or wrongness of an action is relative to the culture for whom it is an issue, one may decide that criticism or condemnation of foreign policies is unjustified and inappropriate.  For example, if a foreign government is heavily censoring the information that its citizens can access on the internet, it might be argued in accordance with moral relativism that there is no motivation or justification for intervening on behalf of those citizens.  This moral paralysis in the face of seemingly obvious injustice is one of the common practical criticisms leveled against moral relativism.  


While frameworks such as virtue ethics, deontology, and consequentialism are used in normative ethics to determine broadly what makes something moral, it is often helpful when dealing with applied ethics, or ethics in real-world applications, to focus on ethical principles.  Principles reflect values that are supported by the moral frameworks and offer more specific points of reference for guiding moral decision-making.


There are currently four principles that are used in the fields of applied ethics.  These were popularized by the philosophers Beauchamp and Childress in the context of applied medical ethics but are implemented in areas such as business ethics as well as computer ethics, or cyberethics.  These principles are understood to be supported by the moral frameworks in various ways, and so, while normative ethicists may debate which framework properly establishes the human good, when discussing specific contexts in cyberethics we can focus on whether these principles are upheld or have been violated.


The four principles of applied ethics are: Beneficence, Non-maleficence, Autonomy, and Justice.


Beneficence refers to the requirement to promote good through our actions.  In other words, the goal of ethical action should be to bring about some good or alleviate some harm.


Non-maleficence means to avoid causing harm through our actions.  


The principle of Autonomy refers to our responsibility to respect and encourage the ability of individuals to make decisions about their own lives; to recognise the right of individuals to be self-determining in their actions and thoughts.


The principle of Justice refers to the fair assignment and distribution of goods and also risks in our decision-making and actions.


If a principle is violated by an action, that action is considered unethical (all other things being equal).  If an action is seen to uphold, or be in accordance with, all four principles, then we are generally considered justified in deeming that action an ethical one.  While these principles may not always provide specific guidance, according to AI ethics researchers Burr and Leslie, they “play a vital, contributory, and sometimes explanatory or justificatory role in deliberation”.  In other words, we appeal to these principles to explain and justify why a particular decision or action is, or is not, ethical.


I should also mention that there is a movement to include a fifth principle, specific to the field of cyberethics.  This is the principle of explicability, sometimes called transparency.  This principle stipulates that the action being performed must be explicable, or understandable, to the people affected by it.  This is increasingly important in the field of cyberethics since, for example, if someone does not understand the data-sharing policy they are agreeing to, one may say that the principle of autonomy is not being met.  I cannot be self-determining in my actions if I do not understand what my actions are committing me to.  For this reason, explicability is being appealed to more often in cyberethics as a fifth principle or a supporting principle.


Ethical dilemmas arise when these principles conflict: in instances where it does not seem possible to act without violating at least one of the principles, or where upholding one principle comes at the cost of another.


Let’s take an example:



If you receive a message that was intended for your friend, and it contains something personally hurtful about your friend, you might wonder whether the right thing to do is to pass the message along to your friend, even though she may be hurt by its content, or to decide not to show her the message in order to spare her feelings.  Since the message is intended for your friend, it would seem that passing it along would be the right thing to do.  This respects your friend’s autonomy, since she would get to decide what to do about information that is being circulated about her.  However, not passing along the message would seem to align with avoiding causing harm to your friend (non-maleficence).  It would seem here that respecting one of the principles comes at the cost of another.


This happens frequently in cyberethics, where it is difficult to determine what course of action would best instantiate the principles of applied ethics.


These three moral frameworks, which identify ways to assess the morality of the agent, the motive, and the consequences of an action, are meant to provide guidance in our thinking about ethical behaviour, and these four principles give further specificity to the values that we expect an ethical person or action to exhibit, or at least not violate.  In this way, we make appeals to frameworks and principles to assess the actions and policies of others and to justify and support our own ethical thinking and action.




This will have to serve as our brief introduction to moral frameworks and principles.  We will have the opportunity to revisit and elaborate on these concepts as we discuss various areas of application in the coming episodes.  A bibliography for further reading can be found in the episode notes.  


I hope you will join me again as we investigate the many ethical issues that arise in the course of our increasingly digital lives.


Bibliography: 

Annas, J. (2013). Intelligent virtue. Oxford University Press.

Beauchamp, T. L., & Childress, J. F. (2013). Principles of biomedical ethics (7th ed.). Oxford University Press.

Burr, C., & Leslie, D. (2022). Ethical assurance: A practical approach to the responsible design, development, and deployment of data-driven technologies. AI and Ethics, 3(1), 73–98. https://doi.org/10.1007/s43681-022-00178-0

Crisp, R. (2000). Aristotle: Nicomachean Ethics. Cambridge University Press.

Kant, I. (2021). Groundwork for the metaphysics of morals. Digireads.com.

Mill, J. S. (2020). Utilitarianism. Bibliotech Press.

Wood, A. (n.d.). Relativism. https://iweb.langara.ca/rjohns/files/2015/01/Allen_Wood.pdf
