Ethics is knowing the difference between what you have a right to do and what is right to do. — Potter Stewart
I’ll wrap up our exploration of software development by spending some time talking about ethics and professional practice—that is, how should you act as a computing professional in situations that pose an ethical dilemma?
We’ll talk about what ethics means and how it applies to the software industry, what ethical theories exist that will give us some tools, what an ethical argument or discussion is, and how to evaluate ethical situations. Finally, we’ll go through three case studies to give you an idea of how these evaluation techniques will work.
Introduction to Ethics
Simply put, ethics is the study of how to decide if something is right or wrong. Another way of putting this is that it is the study of how humans figure out how best to live.
For now, we’re going to assume that everyone knows what right and wrong mean, but as we’ll see in some of the examples, even this can be dicey at times. As for computer ethics, we’ll say it means those ethical issues that a computer professional (in our case, software developers) will face in the course of their job.
It includes your relationships with your colleagues, your managers, and your customers. Computer ethics also includes figuring out how to deal with situations where you must make critical decisions that can affect you, your company, and the users of your products.
For example, what if you’re asked to ship software that you think has a critical safety bug in it—or any serious bug?
What if your company is making illegal copies of software? What if your company is making it easy for others to illegally use or copy the intellectual property of others?
What do you do if you think you have a conflict of interest regarding a project you’re working on? What do you do if you’re offered a job developing software that you find morally objectionable?
What if you discover that your company is keeping track of everyone’s web searches or keystrokes?
Many decisions that are made in your professional life will have an ethical component. For example, the amount of data that a company collects from website visitors has one. Whether customers are made aware of the data collection also has an ethical component.
A decision to release software that allows users to convert files that use digital rights management (DRM) protection into unprotected files has an ethical component. A company’s treatment of its employees also has an ethical component.
The ethical situations you encounter in your professional life aren’t categorically or substantially different from those you encounter outside of it. You still need to examine these situations using general ethical principles and theories. That’s what we’ll start with.
Ethics is the study of what it means to do the right thing and how to do the right thing in different situations. Ethics is a huge branch of philosophy, and we won’t be able to cover more than a small part of it here. We’ll focus on just a couple of different theories and the tools those theories give us to figure out how to do the right thing.
First of all, “ethical theory is based on the assumption that people are rational and make free choices.” That assumption isn’t always true, obviously, but we’ll assume it is and that for the most part people are responsible for their own decisions and actions.
Ethical rules are rules that we follow when we deal with other people and in actions or decisions that affect other people. Most ethical theories have the same goal: “to enhance human dignity, peace, happiness, and well-being.”
We’ll also assume that the ethical rules from an ethical theory apply to everyone and in all situations. These rules should help to clarify our decision making and help lead us to an ethical decision in a particular situation. This is not as hard as it may sound at first. According to Sara Baase:
Behaving ethically, in a personal or professional sphere, is usually not a burden. Most of the time we are honest, we keep our promises, we do not steal, we do our jobs. This should not be surprising.
If ethical rules are good ones, they work for people; that is, they make our lives better. Behaving ethically is often practical. Honesty makes interactions among people work more smoothly and reliably, for example. We might lose friends if we often lie or break promises. Also, social institutions encourage us to do right: We might be arrested if caught stealing. We might lose our jobs if we do them carelessly.
In a professional context, doing good ethically often corresponds closely with doing a good job in the sense of professional quality and competence. Doing good ethically often corresponds closely with good business in the sense that ethically developed products are more likely to please customers.
Sometimes, however, it is difficult to do the right thing. . . . Courage in a professional setting could mean admitting to a customer that your program is faulty, declining a job for which you are not qualified, or speaking out when you see someone else doing something wrong.
We’ll now explore some different ethical theories from two different schools, the deontological school, and the consequentialist school.
The word deontology derives from the Greek deon, meaning “duty” or “that which is binding.” Deontologists believe that people’s actions ought to be guided by moral laws and that these laws are universal (and in some cases, absolute).
Deontologists emphasize duty and absolute rules without regard to the consequences of applying those rules.
Deontological arguments focus on the intent of an act and how that act is or is not defensible as an application of moral law. They usually do not concern themselves with the consequences of an act.
This school of ethical theory comes out of the work of Immanuel Kant. Kant believed that all moral laws were based on rational thought and behavior. Kant stresses fidelity to principles and duty.
His arguments focus on duty divorced from any concerns about happiness or pleasure. Kant’s philosophy is not grounded in knowledge of human nature but in a common idea of duty that applies to all rational beings. One should do the right thing in the right spirit.
Kant contributed many ideas to deontological theory. Here are three of the most important fundamental ideas:
1. There are ethical constants and rules that must apply universally: This is known as the categorical imperative or the principle of universality. In the simplest terms, the categorical imperative is a test of whether an action is right or wrong.
If you propose a moral law or rule, can your conception of that law when acted upon apply universally? “Can the action in question pass the test of universalization? If not, the action is immoral and one has a duty to avoid it. The categorical imperative is a moral compass that gives us a convenient and tenable way of knowing when we are acting morally.”
2. You should always act so as to treat yourself and others as ends in themselves and not as means to an end: That is, it’s wrong to use a person. Rather, every interaction with another person should respect them as a rational human being. “The principle of humanity as an end in itself serves as a limiting condition of every person’s freedom of action. We cannot exploit other human beings and treat them exclusively as a means to our ends or purposes.” One can look at this as a restatement of the traditional saying “Do unto others as you would have them do unto you.”
3. Logic or reason determines the rules of ethical behavior: Actions are intrinsically good if they follow from logic or reason. Rationality is the standard for what is good.
Deontologists believe that it’s the act that’s important in evaluating a moral decision and that the consequences of the act don’t enter into determining whether the act is morally good or not. Kant takes an extreme position on the absolutism of moral rules. For example, take the moral rule It is always wrong to lie.
If a murderer is looking for his intended victim (whom you just hid in your basement) and asks where they are, according to the It is always wrong to lie moral rule it’s ethically wrong for you to lie to protect the intended victim.
In the real world, most people would agree that this is a circumstance where the ethical rule should be broken because of the consequences if you don’t. We’ll come back to this problem with Kant a little later.
As another example of a deontological argument and its problems, consider that most of us have been in situations where we were torn between what we want to do and what we ought to do.
Kant says that what we want to do is of no importance. We should always focus on what we ought to do—in other words, we must do our duty. People who act in a dutiful way feel compelled to act that way out of belief and respect for some moral law. The moral value of an action depends on the underlying moral law.
In order to determine whether a moral rule is correct or good, we try to apply the principle of universality. Let’s work through an example of the application of the principle of universality: Keeping promises.
Say we’re in a difficult situation. In order to get out of that situation, we must make a promise that we later intend to break. The moral rule here would be I am allowed to make promises with the intention of breaking them later. Following the categorical imperative, we attempt to universalize this rule, so the universal version of the rule is:
It is morally correct for everyone in a difficult situation to make a promise they later break. If this is true, then promises become worthless because everyone would know they’d be broken later.
So there would be no such thing as a promise anymore. Hence, the moral rule that applies to me becomes useless when we try to universalize it. We have a logical contradiction: a promise is a promise except when it’s not a promise.
So this is how, when you’re analyzing an ethical dilemma, you apply the principle of universality. In this case, we discover that the rule we started with can’t be extended universally, and so it can’t be a moral rule.
Where are we with respect to deontological ethics? Well, we have a set of assumptions (or axioms) and a means of testing whether new, potential moral laws are correct or not (or right or wrong)—the principle of universality. How well does this work? Let’s try to formulate some pros and cons.
What’s good about the deontological approach to ethics?
It is rational: It’s based on the idea that rational humans can use logic to explain the why behind their actions.
The principle of universality produces universal moral guidelines: These guidelines allow us to make clear moral judgments.
All people are treated as moral equals: This gives us an ethical framework to combat discrimination.
What’s not so good about the deontological approach to ethics?
Sometimes no single rule can fully characterize an action: Example: I’m stealing food to feed my starving children. Although there is an ethical rule against stealing, there’s also an ethical rule that you should protect your children. In this case, these two rules are in conflict.
Deontological arguments don’t give us a way to resolve a conflict between two or more moral rules: Kant’s absolutist position on rules results in the idea that the deontological approach doesn’t tell us which rules are more important than others.
Given the example about stealing food for your starving children, there’s nothing we’ve seen in the deontological discussion on how to resolve this conflict of rules.
Deontological theories (particularly Kant’s) don’t allow any exceptions to the moral rules: This makes them difficult to apply in the real world, where we often need to bend the rules to avoid bad consequences. (But remember, the deontological theory doesn’t care about consequences; it cares about the act and the rule that the act embodies.)
If the deontological theory is flawed, is there another way to think about these ethical situations and reason about how to apply moral rules to solve them?
Consequentialism (Teleological Theories)
There’s another way to think about these ethical situations and reason about them. In addition to thinking about the act, we can also think about the consequences of the act.
This is known as a teleological theory. Teleological theories derive their name from the Greek word telos, meaning “end” or “goal.”
Teleological theories give priority to the good over the right and evaluate actions by the goal or consequences that they produce, hence the name consequentialism. A consequentialist focuses only on the consequences of an act to determine whether the act is good or bad.
Utilitarianism is based on the principle of utility, which says that an action is morally good or right to the extent that it increases the total happiness (or utility) of the affected parties.
The action is morally wrong if it decreases the total happiness. This utility is the tendency of an action to produce happiness (or prevent unhappiness) for an individual or a group of individuals or a community.
An action might increase utility for some people and decrease it for others. This is where Mill’s aphorism the greatest good for the greatest number comes from.
According to utilitarianism, we must have a way to calculate the increase or decrease in happiness. This means we also need some common metric for how to measure happiness and we need to be able to calculate the total happiness or unhappiness of an action. This leads us to two variations on utilitarianism.
Act utilitarianism is the theory that an act is good if its net effect (over all the affected people) is to produce more happiness than unhappiness. Act utilitarians apply the principle of utility to individual acts and all the morally significant people that they affect.
For example, say the local county is considering replacing a stretch of very curvy highway with a straight stretch. We need to consider whether this is a good idea or not.
In order to do this, we must figure out who is affected by this new construction (who are the stakeholders) and what effect will the construction have on them (what is the cost). Say that in order to construct the highway, the county must take possession of 100 homes that the highway will cut through.
Thus, these property holders are stakeholders. The homeowners will be compensated for their property. Also, say about 5,000 cars drive on the highway every day; these drivers are also stakeholders because the new road may make their commutes shorter and they’ll thus have to buy less gas.
More broadly, the county is a stakeholder because it will have to maintain the road over a certain period, say 20 years, so there will be a cost for maintenance.
Even more broadly, there will be some kind of environmental impact because of the new road, and that must be calculated as well. If we use money as the measure of utility, then we can attempt to calculate the utility of building the road. Say that the homeowners are compensated with $20 million.
On the other hand, say that the drivers realize a savings of about $2 each, or $10,000 per workday for using the road, there are 250 workdays a year, and the road will last for 20 years. It costs the county $12 million to build the road, and the environmental cost to animal species of lost habitat is calculated to be about $1 million.
So the total costs for the highway are about $33 million and the benefit to the drivers is about $50 million. Clearly, the road should be built, and the action is good.
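The arithmetic behind this act-utilitarian calculation can be sketched in a few lines. This is a minimal sketch using the figures from the example above, with money as the utility metric; the variable names are mine:

```python
# Act-utilitarian cost-benefit sketch for the highway example.
# Costs (negative utility), in dollars:
homeowner_compensation = 20_000_000   # 100 homes taken by the county
construction_cost      = 12_000_000   # building the straight stretch
environmental_cost     =  1_000_000   # lost animal habitat

total_cost = homeowner_compensation + construction_cost + environmental_cost

# Benefits (positive utility):
savings_per_driver = 2        # dollars saved per car per workday
cars_per_day       = 5_000
workdays_per_year  = 250
road_lifetime      = 20       # years

total_benefit = (savings_per_driver * cars_per_day
                 * workdays_per_year * road_lifetime)

net_utility = total_benefit - total_cost
print(f"cost ${total_cost:,}, benefit ${total_benefit:,}, net ${net_utility:,}")
# cost $33,000,000, benefit $50,000,000, net $17,000,000
```

Since the net utility is positive, the act-utilitarian calculation says to build the road; the paragraphs that follow show why the inputs to such a calculation are rarely this tidy.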
While this example of act utilitarianism seems to work, there are several problems with it. We’ve not taken into account the unhappiness of the homeowners because some or all of them might not want to sell their homes. The impact on neighborhoods that may be divided by the new road is another cost.
The cost of maintenance over 20 years to the county is another, but the value of having fewer accidents on a straight road is a benefit, and so on.
So, it seems for act utilitarianism we need to take into account more things than just the costs involved in the proximate action. It doesn’t seem practical to perform this type of calculation on every ethical decision we have to make.
Act utilitarianism also doesn’t take into account people’s innate sense of duty or obligation and how they take these into account when making ethical decisions. It also forces us to reduce all ethical decisions to a positive or negative outcome—in our example, dollars. Finally, act utilitarianism leads us down the path to the problem of moral luck.
This is the problem where, when faced with an ethical decision, you don’t have complete control over all the factors that determine the ethical goodness or badness of an action.
The example Quinn uses for moral luck is of a dutiful nephew who sends his bedridden aunt a bouquet of flowers, only to discover that she is allergic to one of the flower species in the bouquet and ends up even sicker.
Because the consequences for the aunt were very negative, the action is morally bad, but the nephew’s intentions were good. Finally, it seems like an awful lot of work to do a complete analysis of costs and benefits for every single action we propose that has an ethical component, so act utilitarianism appears to be quite a lot of work. What’s the answer? Maybe we need to make some changes.
A variation of act utilitarianism is rule utilitarianism, which applies the principle of utility to general ethical rules instead of to individual actions. What we’ll do here is make the utilitarian calculation, but for a general ethical rule rather than for individual actions.
Simply put, “rule utilitarianism is the ethical theory that holds we ought to adopt those moral rules which, if followed by everyone, will lead to the greatest increase in happiness.” There’s that greatest good for the greatest number thing again. Let’s look at an example.
A computer worm is a self-contained computer program that exploits a security vulnerability, usually in operating system software, to access a system, release a payload that normally harms the infected system, and reproduce itself so it can propagate to other systems. On 11 August 2003, a worm called Blaster was released onto the Internet.
Blaster exploited a buffer overflow vulnerability in the remote procedure call (RPC) subsystem in the Windows XP and Windows 2000 operating systems in order to access the system, release its payload, and propagate.
Microsoft had patched this vulnerability back in July 2003, but not all Windows users had applied the patch. In roughly four days, Blaster infected over 423,000 computers.
On 18 August 2003 a new worm, called Welchia, was released that exploited the same RPC vulnerability as the Blaster worm. However, when Welchia installed itself on a target system, instead of doing anything harmful it first looked for and deleted the Blaster worm if it was on the target system, downloaded the Microsoft patch for the RPC vulnerability, installed it, and rebooted the target system.
All copies of Welchia deleted themselves on 1 January 2004. The Welchia worm did all its work without the permission of the target system owner. In the computer security community, a worm like Welchia is known as an anti-worm or helper worm.
The ethical question we have is: was the action of the person who released the Welchia worm ethically good or bad? If bad, what might they have done instead? Let’s analyze this ethical problem from a rule utilitarian perspective.
In order to analyze this ethical problem, we must create an appropriate ethical rule and then decide whether its universal adoption would increase the utility of all the stakeholders.
We first need a rule: “If a harmful computer worm is infecting the Internet, and I can write a helpful worm that automatically removes the harmful worm from infected computers and shields them from future attacks, then I should write and release the helpful worm.”
What would be the benefits? Well, clearly, every Windows user who had not already updated their computer with the Microsoft patch would benefit because Welchia deletes Blaster, installs the patch, and shields their computer from any further attacks by Blaster. A clear win.
What about harms? First of all, if everyone followed this rule, then every time there was a new malicious worm released, there would be a flood of helper worms also released.
This would probably slow down or clog network traffic. Also, how could network or system administrators figure out the difference between malicious worms and helper worms?
All they would see is a worm attempting to attack systems. So, the release of all the helper worms would reduce the benefit of using the Internet and other networks attached to it. Secondly, what if some of the helper worms contained bugs?
Not all helpful programmers are perfect, so there is a high probability that some of the helper worms would damage the target systems. This would decrease the usefulness of the individual computer systems and harm their owners.
Finally, the plethora of helper worms would create a large increase in the amount of work for network and system administrators, which would require overtime, or would cause them to not get other tasks finished, or both.
The harm caused by the ethical rule that allows the release of the helper worms seems to decrease the happiness or utility on the Internet rather than increase it. So, this ethical rule should not be created, and the actions of the person who released the Welchia worm are ethically wrong.
It seems like rule utilitarianism keeps the good parts of act utilitarianism but makes the overall calculation of ethical costs and benefits easier.
Because we apply this theory to ethical rules, we also don’t have to recalculate the costs and benefits for every act. We’re also free to choose which rule to adopt, which can get us out of some ethical dilemmas. Finally, it can eliminate the problem of moral luck. Rule utilitarianism seems like it could be the way to go. Except for one problem.
In both forms of utilitarianism, there is the problem that there can be an unequal distribution of good consequences across all of the stakeholders. This problem arises because utilitarianism only cares about the total amount of increase in happiness, not how it’s distributed across all the stakeholders.
For example, suppose acting one way results in everyone getting 100 units of happiness, but acting a different way results in half the stakeholders getting 201 units of happiness each.
According to the utilitarian calculation, we should choose the second option because that will result in more total happiness, regardless of the fact that in the second option half the stakeholders get nothing. This doesn’t seem fair.
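The distribution problem is easy to see numerically. A minimal sketch (the unit counts come from the example above; any even number of stakeholders gives the same result):

```python
# Total-utility comparison that ignores distribution, as plain utilitarianism does.
n = 10  # number of stakeholders (any even number)

option_a = [100] * n                          # everyone gets 100 units
option_b = [201] * (n // 2) + [0] * (n // 2)  # half get 201 units, half get nothing

print(sum(option_a), sum(option_b))
# 1000 1005 -> the utilitarian sum picks option B, despite half getting nothing
```

The sums differ by only half a unit per stakeholder, yet the pure utilitarian calculation always prefers the grossly unequal option.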
John Rawls tried to fix this problem by proposing two principles of justice. These principles say that when making ethical decisions, social and economic inequalities are acceptable if they meet the following two conditions:
(1) Every person in society should have an equal chance to rise to a higher level of social or economic standing, and (2) “social and economic inequalities must be justified. The only way to justify a social or economic inequality is to show that its overall effect is to provide the most benefit to the least advantaged.” This second condition is known as the difference principle.
It’s the difference principle that provides the justification for social policies like a graduated income tax, where those with more income pay higher taxes, and those with less income are entitled to more benefits from society. The two principles of justice are meant to ensure an overall level playing field when making ethical decisions.
In all ethical systems, there are a set of constraints and rules that help guide any ethical discussion. Discussing ethical issues in computing and software development is no different. We’ll look briefly in this section at two of these ethical drivers and how they relate to ethical problems in software development.
In all ethical discussions, we must remember to consider the law because laws constrain our actions and also guide us down ethical paths that society has decided are acceptable behavior.
These legal drivers include federal, state, and local laws, as well as government regulations (which are really interpretations of how the laws should be enforced). These laws govern areas like intellectual property, health and safety, privacy, and data protection.
Every profession has a set of ethical drivers that describe how members of the profession are expected to behave. Software development is no different.
The two professional societies of computing, the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE-CS), have each developed and published codes of conduct for their members. Every software developer should adhere to these codes of conduct.
The two codes of ethics, the ACM Code of Ethics and Professional Conduct and the ACM/IEEE-CS Software Engineering Code of Ethics, are both included at the end of this blog. I’ll let the ACM/IEEE-CS code’s preamble finish off this section. I’ve highlighted (italicized) particularly relevant sections.
Preamble to the ACM/IEEE-CS Software Engineering Code of Ethics
Computers have a central and growing role in commerce, industry, government, medicine, education, entertainment and society at large. Software engineers are those who contribute by direct participation or by teaching, to the analysis, specification, design, development, certification, maintenance and testing of software systems.
Because of their roles in developing software systems, software engineers have significant opportunities to do good or cause harm, to enable others to do good or cause harm, or to influence others to do good or cause harm.
To ensure, as much as possible, that their efforts will be used for good, software engineers must commit themselves to make software engineering a beneficial and respected profession. In accordance with that commitment, software engineers shall adhere to the following Code of Ethics and Professional Practice.
The Code contains eight Principles related to the behavior of and decisions made by professional software engineers, including practitioners, educators, managers, supervisors and policy makers, as well as trainees and students of the profession.
The Principles identify the ethically responsible relationships in which individuals, groups, and organizations participate and the primary obligations within these relationships.
The Clauses of each Principle are illustrations of some of the obligations included in these relationships. These obligations are founded in the software engineer’s humanity, in special care owed to people affected by the work of software engineers, and in the unique elements of the practice of software engineering.
The Code prescribes these as obligations of anyone claiming to be or aspiring to be a software engineer.
It is not intended that the individual parts of the Code be used in isolation to justify errors of omission or commission. The list of Principles and Clauses is not exhaustive. The Clauses should not be read as separating the acceptable from the unacceptable in professional conduct in all practical situations.
The Code is not a simple ethical algorithm that generates ethical decisions. In some situations, standards may be in tension with each other or with standards from other sources.
These situations require the software engineer to use ethical judgment to act in a manner that is most consistent with the spirit of the Code of Ethics and Professional Practice, given the circumstances.
Ethical tensions can best be addressed by thoughtful consideration of fundamental principles, rather than blind reliance on detailed regulations.
These Principles should influence software engineers to consider broadly who are affected by their work;
to examine if they and their colleagues are treating other human beings with due respect;
to consider how the public, if reasonably well informed, would view their decisions;
to analyze how the least empowered will be affected by their decisions;
and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer.
In all these judgments concern for the health, safety and welfare of the public are primary; that is, the “Public Interest” is central to this Code.
The dynamic and demanding context of software engineering requires a code that is adaptable and relevant to new situations as they occur. However, even in this generality, the Code provides support for software engineers and managers of software engineers who need to take positive action in a specific case by documenting the ethical stance of the profession.
The Code provides an ethical foundation to which individuals within teams and the team as a whole can appeal. The Code helps to define those actions that are ethically improper to request of a software engineer or teams of software engineers.
The Code is not simply for adjudicating the nature of questionable acts; it also has an important educational function. As this Code expresses the consensus of the profession on ethical issues, it is a means to educate both the public and aspiring professionals about the ethical obligations of all software engineers.
Ethical Discussion and Decision Making
Given all the theories we’ve looked at, how do you actually make a decision when faced with an ethical problem? Here’s one process that can be followed. Divide the process into two parts: identifying and describing the problem, and then analyzing the problem and coming to a decision.
Naturally, you can alter the steps here and do them in a different order. Change the process to one that fits your particular ethical situation and interests. Here are the steps:
Identifying and Describing the Problem
1. Write down the statement of the ethical problem. This will help to clarify what exactly you’re talking about.
2. List the risks, problems, and possible consequences.
3. List all the stakeholders. This will include you and anyone else involved in the ethical situation and anyone involved in the consequences of the decision.
4. Identify all the basic ethical issues in each case. Try to establish the rights and wrongs of the situation and figure out what ethical rules might be involved.
5. Identify any legal issues. This includes intellectual property issues and health and safety issues.
6. List possible actions if the problem is more complex than a simple yes/no.
Analyzing the Problem
1. What are your first impressions of or reactions to these issues? What does your moral intuition say?
2. Identify the responsibilities of the decision maker. This includes things like your duty to report ethical problems if you’re an employee, and your added responsibilities if you’re a manager.
3. Identify the rights of the stakeholders.
4. Consider the consequences of each action option for the stakeholders. Analyze the consequences, risks, benefits, harms, and costs of each action considered.
5. Find the sections of the SE Code and the ACM Code that pertain to the problem and the actions. These will help you identify the ethical rules involved and lay out the situation so you can consider alternatives.
6. Consider the deontological and utilitarian approaches to the problem. You’ll need the ethical rules you’ve considered in front of you, as well as the relevant sections of the SE and ACM codes of ethics. Work through the examples of other ethical situations given here, and then follow those examples for your own situation.
7. Do the ethical theories point to one course of action? If more than one, which should take precedence? List the different courses of action and, if necessary, try to prioritize them. This will help you think about the different options.
8. Which of the potential actions do you think is the right one? If you’re using a utilitarian approach, consider picking a metric and seeing whether you can measure the effects of the decision.
9. If several options are ethically acceptable, pick one. Then reflect on your decision.
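As a rough illustration of the utilitarian metric suggested in step 8, here is a minimal sketch of a weighted decision matrix. Every option, stakeholder group, weight, and score below is invented for illustration; a real analysis would have to justify each number, and reasonable people will disagree about them.

```python
# Hypothetical utilitarian scoring sketch: rate each candidate action's
# estimated effect on each stakeholder group, weight by group size, and
# pick the action with the highest weighted net utility. All numbers
# here are invented for illustration only.

# Stakeholder groups with rough weights (e.g., relative group size).
weights = {"users": 10, "company": 3, "developers": 2}

# Estimated effect of each action on each group, from -5 (serious harm)
# to +5 (clear benefit).
effects = {
    "ship now, patch later": {"users": -4, "company": 2, "developers": 1},
    "delay and finish testing": {"users": 3, "company": -1, "developers": -1},
}

def net_utility(action):
    """Weighted sum of an action's estimated effects across stakeholders."""
    return sum(weights[group] * score
               for group, score in effects[action].items())

# Choose the action with the highest weighted net utility.
best = max(effects, key=net_utility)
print(best)  # under these invented numbers, the delay option wins
```

The point of the exercise is not the arithmetic but the discipline: writing down explicit scores forces you to state who is harmed, who benefits, and how much weight you are giving each group.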
Ethics Case Studies
This section presents four short case studies that illustrate the types of ethical problems you might encounter as a software developer. They cover ethical situations involving intellectual property, privacy, system safety, and conflicts of interest.
Your job is to analyze each case study, identify the ethical issues, and propose a course of action. Be aware that there may not be one “right” answer to the particular ethical problem.
#1 Copying Software
Elon Symons teaches mathematics at an inner-city high school in Chicago. Like many rural and inner-city high schools, Elon’s has very little money to spend on computers or computer software.
Although her students do very well and have even placed in a statewide math competition, many of them arrive woefully underprepared for high school mathematics, so Elon and her colleagues spend quite a bit of time on remedial work.
Recently, a local company has offered to donate 24 iMacs to Elon’s high school. It’s been decided that a dozen of these computers will be used to create a mathematics computer lab specifically to help the students with remedial work in pre-algebra, algebra, geometry, and trigonometry.
Elon wants to use a software program called MathTutor for the computer lab, but a site-wide license for the titles she wants is around $5,000—money that her school just doesn’t have.
The high school already has one copy of MathTutor, and there’s no copy protection on the program. Elon’s department chair has suggested that they just make copies of the program for the new computers.
Elon doesn’t think this is a good idea, but she’s desperate to use the new computers to help her students. What should Elon do? What are the ethical issues here?
#2 Whose Computer Is It
At Massive Corporation, you’re a software development manager. A developer on one of your software projects is out sick. Another developer asks that you copy all the files from the sick developer’s computer to his computer so he can do some important work. What should you do? What are the ethical issues here?
#3 How Much Testing Is Enough
You’re the project manager for a development team that’s in the final stages of a project to create software that uses radiation therapy to destroy cancerous tumors.
Once set up by an operator, the software controls the intensity, duration, and direction of the radiation. Because this is a new piece of software in a new product, there have been a series of problems and delays.
The program is in the middle stages of system testing, and the routine testing done so far has all gone well, with very few software defects found. Your manager wants to cut the remaining testing short in order to meet the (updated) software delivery deadline. This would mean doing only the routine testing and skipping the stress testing that’s scheduled.
You are trying to decide whether to ship the software on time and then continue the testing afterward, shipping patches for any defects found. What are the ethical issues here?
#4 How Much Should You Tell
You’re a principal in the J2MD computer software consulting company. One of your clients, the City of Charleston, South Carolina, wants your company to evaluate a set of proposals for a new administrative computing system and provide a recommendation to the city on which proposal to accept.
The contract for the new system would be worth several million dollars to the winning company. Your spouse works for Lowcountry Computing, one of the bidding companies, and she’s the project manager in charge of writing their proposal to the city. You have seen early copies of her proposal and judge it to be excellent.
Should you tell the project manager in the City of Charleston about your spouse’s employment at Lowcountry Computing? If so, when, and how much else should you reveal?
The Last Word on Ethics
Every software development professional will encounter ethical problems during the course of their career. How you handle those ethical situations will say a lot about your professional behavior and moral character.
To wrap up this discussion of professional practice, let’s look at one more list of fundamental ethical principles that you should carry with you throughout your career. The original list comes largely from Quinn, and has been modified:
1. Be impartial: You will have some loyalty to your company, but you must also be loyal to society as a whole and to yourself. Make sure you remember that.
2. Disclose information that others ought to have: Don’t hide information from people who need to know it. Don’t be deceptive or deliberately misleading. Make sure you disclose any conflicts of interest.
3. Respect the rights of others: This includes intellectual property rights, civil rights, and other property rights. Don’t steal intellectual property or misuse others’ property (for example, by denying access to systems, networks, or services, or by breaking into other systems).
4. Treat others justly: Don’t discriminate against others for attributes unrelated to their job. Make sure that others receive fair wages and benefits and credit for work done.
5. Take responsibility for your own actions and inactions: Take responsibility for everything you do, or don’t do, whether good or bad.
6. Take responsibility for the actions of those you supervise: The old saying “The buck stops here” applies to you as a manager as well. This also includes making sure you communicate effectively with your employees.
7. Maintain your integrity: Deliver on your commitments. Be loyal to your employer (as long as they also operate in an ethical manner). Don’t ask someone to do anything you wouldn’t do yourself.
8. Continually improve your abilities: Software development and the computer industry as a whole are in a constant state of flux. The tools and languages you used in college will be obsolete five years later. Make sure you’re a lifelong learner.
9. Share your knowledge, expertise, and values: The more experience you acquire in your profession, the more you’re obligated to share your knowledge and expertise with your co-workers and subordinates. You should also set an example for others by living these values.