Using Python to Access Web Data: Week 4 Assignment

Programs that Surf the Web (Chapter 12)

Reading Web Data From Python

1. Which of the following Python data structures is most similar to the value returned in this line of Python:

x = urllib.request.urlopen('http://data.pr4e.org/romeo.txt')
  • socket
  • list
  • file handle
  • dictionary
  • regular expression

2. In this Python code, which line actually reads the data?

import socket

mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('data.pr4e.org', 80))
cmd = 'GET http://data.pr4e.org/romeo.txt HTTP/1.0\n\n'.encode()
mysock.send(cmd)

while True:
    data = mysock.recv(512)
    if len(data) < 1:
        break
    print(data.decode())
mysock.close()
  • mysock.recv()
  • socket.socket()
  • mysock.close()
  • mysock.connect()
  • mysock.send()

3. Which of the following regular expressions would extract the URL from this line of HTML:

<p>Please click <a href="http://www.dr-chuck.com">here</a></p>
  • href="(.+)"
  • href=".+"
  • http://.*
  • <.*>
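As a quick check, the capturing group in href="(.+)" pulls out just the URL between the quotes. A minimal sketch using Python's re module:

```python
import re

line = '<p>Please click <a href="http://www.dr-chuck.com">here</a></p>'
# The parentheses form a capturing group, so group(1) is the URL only
match = re.search(r'href="(.+)"', line)
print(match.group(1))  # http://www.dr-chuck.com
```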

4. In this Python code, which line is most like the open() call to read a file:

import socket

mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysock.connect(('data.pr4e.org', 80))
cmd = 'GET http://data.pr4e.org/romeo.txt HTTP/1.0\n\n'.encode()
mysock.send(cmd)

while True:
    data = mysock.recv(512)
    if len(data) < 1:
        break
    print(data.decode())
mysock.close()
  • mysock.connect()
  • import socket
  • mysock.recv()
  • mysock.send()
  • socket.socket()

5. Which HTTP header tells the browser the kind of document that is being returned?

  • Content-Type:
  • Metadata:
  • Document-Type:
  • ETag:
  • HTML-Document:

6. What should you check before scraping a web site?

  • That the web site returns HTML for all pages
  • That the web site allows scraping
  • That the web site only has links within the same site
  • That the web site supports the HTTP GET command

7. What is the purpose of the BeautifulSoup Python library?

  • It builds word clouds from web pages
  • It optimizes files that are retrieved many times
  • It allows a web site to choose an attractive skin
  • It repairs and parses HTML to make it easier for a program to understand
  • It animates web operations to make them more attractive

8. What ends up in the "x" variable in the following code:

html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')
x = soup(‘a’)
  • A list of all the anchor tags (<a..) in the HTML from the URL
  • True if there were any anchor tags in the HTML from the URL
  • All of the externally linked CSS files in the HTML from the URL
  • All of the paragraphs of the HTML from the URL

9. What is the most common Unicode encoding when moving data between systems?

  • UTF-128
  • UTF-8
  • UTF-16
  • UTF-32
  • UTF-64

10. What is the decimal (Base-10) numeric value for the upper case letter "G" in the ASCII character set?

  • 71
  • 7
  • 103
  • 25073
  • 14
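You can verify this with Python's built-in ord() (character to code point) and chr() (code point to character):

```python
print(ord('G'))  # 71
print(chr(71))   # G
print(ord('g'))  # 103 -- lower-case letters sit 32 higher in ASCII
```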

11. What word does the following sequence of numbers represent in ASCII: 108, 105, 110, 101

  • ping
  • line
  • func
  • tree
  • lost

12. How are strings stored internally in Python 3?

  • Byte Code
  • Unicode
  • ASCII
  • EBCDIC
  • UTF-8
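A short illustration of the difference between the internal Unicode string and its encoded byte form:

```python
s = 'café'                     # a Python 3 str is a sequence of Unicode code points
print(len(s))                  # 4 characters
print(len(s.encode('utf-8')))  # 5 bytes -- 'é' takes two bytes in UTF-8
```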

13. When reading data across the network (i.e. from a URL) in Python 3, what method must be used to convert it to the internal format used by strings?

  • decode()
  • find()
  • upper()
  • encode()
  • trim()
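Network reads return bytes; decode() converts them to str (UTF-8 by default), and encode() goes the other way:

```python
raw = b'Hello world'   # bytes, as returned by recv() or urlopen().read()
text = raw.decode()    # bytes -> str (UTF-8 by default)
back = text.encode()   # str -> bytes
print(type(text).__name__, type(back).__name__)  # str bytes
```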

Scraping HTML Data with BeautifulSoup

Scraping Numbers from HTML using BeautifulSoup

In this assignment you will write a Python program similar to http://www.py4e.com/code3/urllink2.py. The program will use urllib to read the HTML from the data files below, parse the data, extract the numbers, and compute the sum of the numbers in the file.

We provide two files for this assignment. One is a sample file where we give you the sum for your testing and the other is the actual data you need to process for the assignment.

Sample data: http://py4e-data.dr-chuck.net/comments_42.html (Sum=2553)
Actual data: http://py4e-data.dr-chuck.net/comments_1913242.html (Sum ends with 79)

You do not need to save these files to your folder since your program will read the data directly from the URL. Note: each student has a distinct data URL for the assignment, so only use your own data URL for analysis.

Data Format

The file is a table of names and comment counts. You can ignore most of the data in the file except for lines containing a name and a count, like the following:

Modu 90
Kenzie 88
Hubert 87

You are to find all the <span> tags in the file, pull out the number from each tag, and sum the numbers.

Look at the sample code provided. It shows how to find all of a certain kind of tag, loop through the tags, and extract the various aspects of each tag:

# Retrieve all of the anchor tags
tags = soup('a')
for tag in tags:
    # Look at the parts of a tag
    print('TAG:', tag)
    print('URL:', tag.get('href', None))
    print('Contents:', tag.contents[0])
    print('Attrs:', tag.attrs)

You need to adjust this code to look for <span> tags, pull out the text content of each span tag, convert it to an integer, and add the integers up to complete the assignment.

Sample Execution

$ python3 solution.py
Enter - http://py4e-data.dr-chuck.net/comments_42.html
Count 50
Sum 2...

Turning in the Assignment

Enter the sum from the actual data and your Python code below:

Sum: (ends with 79)

Python code:

from bs4 import BeautifulSoup
import urllib.request

url = input("Enter URL: ")
html = urllib.request.urlopen(url).read()

soup = BeautifulSoup(html, 'html.parser')

tags = soup('span')

total = 0
count = 0

for tag in tags:
    total += int(tag.text)
    count += 1

print("Count", count)
print("Sum", total)

Assignment: Following Links in HTML Using BeautifulSoup

Following Links in Python

In this assignment you will write a Python program that expands on http://www.py4e.com/code3/urllinks.py. The program will use urllib to read the HTML from the data files below, extract the href= values from the anchor tags, scan for a tag that is in a particular position relative to the first name in the list, follow that link, repeat the process a number of times, and report the last name you find.

We provide two files for this assignment. One is a sample file where we give you the name for your testing and the other is the actual data you need to process for the assignment.

Sample problem: Start at http://py4e-data.dr-chuck.net/known_by_Fikret.html. Find the link at position 3 (the first name is 1). Follow that link. Repeat this process 4 times. The answer is the last name that you retrieve.
Sequence of names: Fikret Montgomery Mhairade Butchi Anayah
Last name in sequence: Anayah

Actual problem: Start at http://py4e-data.dr-chuck.net/known_by_Enrika.html. Find the link at position 18 (the first name is 1). Follow that link. Repeat this process 7 times. The answer is the last name that you retrieve.
Hint: The first character of the name of the last page that you will load is: R

Strategy

The web pages tweak the height between the links and hide the page after a few seconds to make it difficult for you to do the assignment without writing a Python program. But frankly, with a little effort and patience you can overcome these attempts to make the assignment harder to complete by hand. That is not the point, though. The point is to write a clever Python program to solve the problem.
Sample Execution

Here is a sample execution of a solution:

$ python3 solution.py
Enter URL: http://py4e-data.dr-chuck.net/known_by_Fikret.html
Enter count: 4
Enter position: 3
Retrieving: http://py4e-data.dr-chuck.net/known_by_Fikret.html
Retrieving: http://py4e-data.dr-chuck.net/known_by_Montgomery.html
Retrieving: http://py4e-data.dr-chuck.net/known_by_Mhairade.html
Retrieving: http://py4e-data.dr-chuck.net/known_by_Butchi.html
Retrieving: http://py4e-data.dr-chuck.net/known_by_Anayah.html

The answer to the assignment for this execution is "Anayah".

Turning in the Assignment

Enter the last name retrieved and your Python code below:

Name: (name starts with R)

Python code:

import urllib.request
from bs4 import BeautifulSoup

url = input("Enter URL: ")
count = int(input("Enter count: "))
position = int(input("Enter position: "))

for _ in range(count):
    html = urllib.request.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')
    tags = soup('a')
    link = tags[position - 1].get('href', None)
    print("Retrieving:", link)
    url = link

last_name = url.split('_')[-1].split('.')[0]

print("The answer to the assignment is:", last_name)
