Ethics: Part 2
Issues: Overview
What are some of the areas of computing and technology where ethical
issues have arisen?
- Networking
- E-mail and spam (including spam prevention attempts)
- WWW and free speech issues (pornography, censorship, controlling
access to certain content by children, for example)
- Chat-room predators (ethical issues in police "sting"
operations)
- Viruses, worms, trojan horses
- Computer break-ins, denial of service attacks
- Intellectual Property Issues
- Intellectual property rights (copyright, patents, trademarks)
- Fair use doctrine
- Technology like Digital Rights Management, Copy-protected CDs,
etc
- Peer-to-peer file-sharing networks
- Privacy
- Technology for information gathering/tracking (spyware, cookies,
RFIDs)
- Data mining
- Identity theft
- Encryption
- Computer Reliability
- Applies to software and hardware systems
- Important issue for developers
- Software warranties/disclaimers
- Who is responsible when a system fails and results in damages
(monetary or otherwise)?
- Professional Ethics (for programmers / software engineers / technology
  professionals)
E-mail and Spam
- Spam: Unsolicited bulk e-mail. Accounts for a large percentage of
e-mail traffic nowadays
- Most of us hate spam
- Does our hatred of spam make sending spam ethically wrong? (No)
- What about free speech? Does spam count?
- Effects of spam
- Uses up handling resources (routers, etc) and storage
- Potential vehicle for viruses, trojan horses
- Low-cost alternative to snail mail marketing. Low response rate
expected (even 1 in 100,000 can be profitable)
- Nuisance to most recipients
- Common tactics used by spammers
- Obtain e-mails through submissions (online pages, warranty cards,
purchase from other companies), web page searching (web crawler
bots), random e-mailing
- Disguise subject line and content to bypass filters
- Spoof source computer IP address and even sender address
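The filter-evasion tactic above can be shown with a toy example. This is a minimal sketch, not a real spam filter; the blocked-word list and sample subjects are made up for illustration:

```python
# A naive keyword filter and the disguising tactic that defeats it.
# The word list and sample subjects are illustrative, not from any real filter.

BLOCKED_WORDS = {"viagra", "lottery", "winner"}

def is_spam(subject: str) -> bool:
    """Flag a message if any blocked word appears in the subject line."""
    words = subject.lower().split()
    return any(w in BLOCKED_WORDS for w in words)

print(is_spam("Cheap viagra here"))   # True -- caught by the filter
print(is_spam("Cheap v1agra here"))   # False -- digit-for-letter swap slips through
```

A single character substitution defeats exact keyword matching, which is why real filters moved to statistical (e.g. Bayesian) scoring rather than word lists.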
Ethical evaluations of Sending Spam
- Kantian analysis
- Sending to huge audience to reach tiny fraction of people actually
interested. Most recipients not interested
- Universality: Could everybody be allowed to use others' resources
against their wishes?
- Use of people as a means to an end -- bad
- Conclusion: Spamming is morally wrong
- Act Utilitarianism analysis
- Where to define parameters? Whose "utility" do we look at?
- Suppose spam message to 100 million people
- 1 in 100,000 responds with a purchase
- 1 entrepreneur makes lots of money -- he is happy (with high
intensity)
- 1,000 customers -- possibly happy (depends on customer
satisfaction) -- lower intensity
- 99,999,000 people unhappy -- much lower intensity per person
- Bottom line: Spam annoys probably 99.99 percent of the population,
makes these people unhappy. Probably safe to say spamming is
wrong
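The act-utilitarian tally above can be sketched numerically. The utility weights below are illustrative assumptions, not part of the analysis; only the audience size and response rate come from the example:

```python
# Rough act-utilitarian tally for one spam run of 100 million messages.
# All utility weights (1000, 1, -0.01) are made-up illustrative values.
recipients = 100_000_000
buyers = recipients // 100_000        # 1 in 100,000 responds with a purchase

spammer_utility = 1000                # one very happy entrepreneur (high intensity)
buyer_utility = 1 * buyers            # possibly-satisfied customers (lower intensity)
annoyed_utility = -0.01 * (recipients - buyers)   # tiny negative per annoyed recipient

net = spammer_utility + buyer_utility + annoyed_utility
print(net)   # large negative: many small annoyances swamp the concentrated gains
```

Even with a per-recipient annoyance 100,000 times smaller than the spammer's gain, the sheer number of unhappy recipients drives the total far below zero, which is the point of the analysis.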
- Rule utilitarianism analysis
- General case similar to act utilitarian analysis above
- Overall drop in usefulness of e-mail system -- clogged resources
- Spamming wrong
- Social Contract theory analysis
- Right to free speech. Can send e-mail to anyone you want
- People also have a right to not listen (conflicting rights)
- Spammers are not up front -- disguise identity, content, attempt to
bypass your right not to listen (attempts to filter, etc).
- Because this is not above board, spamming wrong
- Pretty nice consensus, yes? (It's not always this easy)
- How would analyses change if e-mails all had valid subject and sender
ID information?
Example of combating spam: MAPS
- MAPS = Mail Abuse Prevention System
- California-based organization dedicated to reducing spam
- maintains a Real-time Blackhole List (RBL) of networks that forward
spam or allow it to be created
- RBL made available to third parties. Network admins can use to check
mail coming into their system (against the list)
- E-mail marketers blacklisted based on absence of standard practices
(like opt-in only, ability to unsubscribe, removal of invalid
addresses, disclosure of practices)
- Opponents protest it restricts free speech
- If an entire ISP is blacklisted, valid e-mails could get blocked
- Kantian evaluation:
- Suppose rule is "It is right to blacklist ISPs where spam is created
so that they will change behavior, drop customers who send spam,
etc"
- This uses innocent ISP customers as a means to an end. Violates
Categorical Imperative
- Conclusion: The blacklist (RBL) is wrong
- Utilitarian evaluation:
- If an ISP uses the blacklist, users benefit by not receiving as much
spam, and better network performance
- Some users might suffer because valid e-mails dropped, and harder to
send to certain domains
- Full analysis would need to decide the net benefit to customers of
ISPs using the blacklist, weigh against the negatives
- Social contract theory evaluation:
- MAPS assumption -- e-mail should have equal benefit for sender and
receiver
- So, no sender has a right to expect an e-mail to be delivered
(gives sender greater benefit)
- Network admin has right to refuse to accept a piece of e-mail,
because it consumes network resources
- MAPS doesn't force anyone to use blacklist. System administrators
can decide whether to use the information
- From this standpoint, blacklist is okay
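DNS-based blackhole lists in the MAPS tradition are queried by reversing an address's octets and looking the name up under the list's DNS zone; an answer means the address is listed. A minimal sketch follows; the zone name `rbl.example.org` is a placeholder, not a real list:

```python
import socket

# Sketch of how a DNS-based blackhole list (like the MAPS RBL) is queried.
# The zone "rbl.example.org" is a placeholder, not an operating blacklist.

def rbl_query_name(ip: str, zone: str) -> str:
    """Reverse the IPv4 octets and append the list's DNS zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """An A-record answer means the address is on the blacklist."""
    try:
        socket.gethostbyname(rbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # A mail server would run this check before accepting a connection.
    print(rbl_query_name("127.0.0.2", "rbl.example.org"))
```

This design is why adoption is voluntary: the list is just published data, and each site's mail server decides what to do with a positive answer.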
Intellectual Property
Here is a PowerPoint notes set used previously
in Intro to Computer Science, put together by Daniel Chang. This set
covers intellectual property issues (copyright, patents, fair use, etc)
Software Development/Quality
Even small errors in systems/software can lead to serious consequences
- Monetary losses
- Data loss
- Lost productivity/time
- Safety hazards
Some notable computer failures
- Patriot Missile
- During Gulf War (1991), it was supposed to shoot down incoming Scud
missiles
- Scud missiles were not reliable -- so most failed on their own
- Feb 25, 1991 -- a Scud hits U.S. Army barracks, killing 28. The
Patriot system never even fired at it
- Failure at Dhahran
- Traced to a software error. The missile battery detected the Scud
and, to avoid false alarms, was supposed to check the target multiple
times. The flight path prediction was off because the system's clock
counted time in tenths of a second -- a value with no exact binary
representation -- so a tiny truncation error accumulated with
continuous run time (clock drift)
- If operated a few hours at a time, it was accurate enough. This one
had been running for about 100 hours continuously
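The drift can be sketched with a back-of-the-envelope calculation, using the per-tick truncation error figure from the GAO post-incident report; the Scud speed is an approximation:

```python
# The Patriot tracked time in 0.1 s ticks stored in a 24-bit fixed-point
# register. 0.1 has no exact binary representation, so each tick carried a
# tiny truncation error that accumulated with uptime.
# ERROR_PER_TICK is the figure reported in the GAO investigation.

ERROR_PER_TICK = 0.000000095      # seconds lost per 0.1 s tick (GAO report)
TICKS_PER_HOUR = 3600 * 10        # ten ticks per second

def clock_error(hours: float) -> float:
    """Accumulated clock error in seconds after `hours` of continuous uptime."""
    return hours * TICKS_PER_HOUR * ERROR_PER_TICK

SCUD_SPEED = 1676.0               # approximate Scud velocity, m/s

for hours in (8, 100):
    drift = clock_error(hours)
    print(f"{hours:>3} h uptime: clock off by {drift:.3f} s "
          f"-> target position off by ~{drift * SCUD_SPEED:.0f} m")
```

At 100 hours the clock is off by about a third of a second, which at Scud speeds displaces the predicted position by hundreds of meters -- enough for the system to dismiss the detection.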
- Ariane 5
- Satellite launch rocket, developed for the European Space Agency
- Maiden flight (June 1996), blew up about 40 seconds in
- Traced to a software error, a piece of code that converts a 64-bit
float to a 16-bit signed int
- This was built without an exception handler, because it was
sufficient for Ariane 4 (overflow would never occur given the
range of values used)
- Same software ported into Ariane 5. Overflow occurred, no error
handler. Kaboom.
- Was carrying $500 million worth of uninsured satellites.
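The failure mode can be sketched as follows. The flight software was written in Ada, where the unchecked conversion raised an unhandled exception; the Python wrap-around below just illustrates why the value cannot fit in 16 bits. The velocity values and names are illustrative, not flight data:

```python
# Sketch of the Ariane 5 failure mode: a 64-bit float (horizontal velocity)
# converted to a 16-bit signed integer without a range check.
# Velocities and names are illustrative, not from the flight software.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(x: float) -> int:
    """Mimic a raw narrowing conversion: keep only the low 16 bits, sign-wrapped."""
    n = int(x) & 0xFFFF
    return n - 0x10000 if n > INT16_MAX else n

ariane4_velocity = 20000.0   # fits: Ariane 4 trajectories never exceeded int16 range
ariane5_velocity = 40000.0   # Ariane 5 flies a faster trajectory -- out of range

print(to_int16_unchecked(ariane4_velocity))   # correct value
print(to_int16_unchecked(ariane5_velocity))   # garbage: wraps to a negative number
```

The "it can never overflow" reasoning was valid for Ariane 4's flight envelope; reusing the module without revisiting that assumption is the actual lesson.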
- Mars
Climate Orbiter
- $125 million system
- Destroyed when it missed its intended orbit and burned up in the Mars
atmosphere
- Navigation error caused by a units mismatch between two pieces of
software
- Flight operation software on the ground (Colorado team) output
thrust data in English units (pound-force)
- Navigation software (Jet Propulsion Lab in California) expected
metric input (newtons)
- Ground team provided thrust info -- navigation team relayed it to
the spacecraft. The actual effect was 4.45 times what the software
assumed (1 pound-force is about 4.45 newtons)
- A few months later, the $165 million Mars Polar Lander was also lost,
probably crashed. Believed to be a software error
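The mismatch can be sketched in a few lines. The function names are illustrative; only the unit conversion factor and the two teams' assumed units come from the incident:

```python
# Sketch of the Mars Climate Orbiter unit mismatch. Ground software reported
# thruster data in English units (pound-force); navigation software read the
# same numbers as metric (newtons). Function names are hypothetical.

LBF_TO_NEWTON = 4.448222          # 1 pound-force in newtons

def ground_report(thrust_lbf: float) -> float:
    """Ground team: emits the raw number, implicitly in pound-force."""
    return thrust_lbf

def navigation_ingest(value: float) -> float:
    """Navigation team: treats the incoming number as newtons."""
    return value

thrust = 10.0                                      # pound-force actually produced
assumed_newtons = navigation_ingest(ground_report(thrust))
actual_newtons = thrust * LBF_TO_NEWTON

print(actual_newtons / assumed_newtons)            # ~4.45, the factor in the notes
```

Nothing in the number itself says what unit it is in, which is why interface specifications (and typed unit libraries) exist: the bug lived in the handoff, not in either team's code alone.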
- Therac-25
- Radiation therapy machine
- At least 5 patients died from massive radiation overdoses
- Caused by both hardware and software errors
Moral responsibility
- Should developers/managers be held morally responsible for
catastrophic failures? (Especially in the case of fatal results)
- A moral agent is considered responsible for a harmful event if these
two conditions hold:
- Causal condition: The actions (or inactions) of the agent must have
caused the harm
- Mental condition: The actions (or inactions) must have been intended
or willed by the agent
- In the case of discovered flaws (like in the Therac-25), causal
condition easy to prove
- Mental condition harder -- surely the developer didn't intend
harm?
- Some philosophers extend the mental condition to include unintended
harm resulting from carelessness, recklessness, or negligence
- Certainly software developers have a moral responsibility to do their
best to ensure correctness and quality in their products. This is
especially true when failures could lead to catastrophic results
Software Code of Ethics
- Endorsed by ACM and IEEE-CS
- A practical framework for moral decision making related to software
engineering issues
- Software Engineering
Code of Ethics and Professional Practice
- Fundamental principles
- Be impartial -- the good of the general public is equally important
to the good of organization, company, or self
- Disclose information -- don't conceal information that could lead to
harm, disclose potential conflicts of interest, don't make misleading
statements
- Respect the rights of others -- privacy, property, etc
- Treat others justly
- Take responsibility for your actions or inactions
- Take responsibility for the actions of those you supervise
- Maintain your integrity
- Continually improve your abilities
- Share your knowledge, expertise, and values