Failure in web scraping by Beautiful Soup
#1
When I was trying to scrape some data from a webpage, this error suddenly came up; it had never happened before. I did check my browser settings and cookies are enabled, so I suspect the server has blocked my access to the website.
<html>
<head>
<script>
Challenge=305158;
ChallengeId=395740057;
GenericErrorMessageCookies="Cookies must be enabled in order to view this page.";
</script>
<script>
function test(var1)
{
    var var_str=""+Challenge;
    var var_arr=var_str.split("");
    var LastDig=var_arr.reverse()[0];
    var minDig=var_arr.sort()[0];
    var subvar1 = (2 * (var_arr[2]))+(var_arr[1]*1);
    var subvar2 = (2 * var_arr[2])+var_arr[1];
    var my_pow=Math.pow(((var_arr[0]*1)+2),var_arr[1]);
    var x=(var1*3+subvar1)*1;
    var y=Math.cos(Math.PI*subvar2);
    var answer=x*y;
    answer-=my_pow*1;
    answer+=(minDig*1)-(LastDig*1);
    answer=answer+subvar2;
    return answer;
}
</script>
<script>
client = null;
if (window.XMLHttpRequest)
{
    var client=new XMLHttpRequest();
}
else
{
    if (window.ActiveXObject)
    {
        client = new ActiveXObject('MSXML2.XMLHTTP.3.0');
    };
}
if (!((!!client)&&(!!Math.pow)&&(!!Math.cos)&&(!![].sort)&&(!![].reverse)))
{
    document.write("Not all needed JavaScript methods are supported.<BR>");

}
else
{
    client.onreadystatechange  = function()
    {
        if(client.readyState  == 4)
        {
            var MyCookie=client.getResponseHeader("X-AA-Cookie-Value");
            if ((MyCookie == null) || (MyCookie==""))
            {
                document.write(client.responseText);
                return;
            }

            var cookieName = MyCookie.split('=')[0];
            if (document.cookie.indexOf(cookieName)==-1)
            {
                document.write(GenericErrorMessageCookies);
                return;
            }
            window.location.reload(true);
        }
    };
    y=test(Challenge);
    client.open("POST",window.location,true);
    client.setRequestHeader('X-AA-Challenge-ID', ChallengeId);
    client.setRequestHeader('X-AA-Challenge-Result',y);
    client.setRequestHeader('X-AA-Challenge',Challenge);
    client.setRequestHeader('Content-Type' , 'text/plain');
    client.send();
}
</script>
</head>
<body>
<noscript>JavaScript must be enabled in order to view this page.</noscript>
</body>
</html>
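As far as I can tell, the script computes an answer from Challenge and POSTs it back in the X-AA-* headers, then reloads the page once the cookie is set. Here is my rough Python port of test() for reference (my own reading of the JavaScript, including its number/string coercions, so treat it as an unverified sketch):

import math

def solve_challenge(challenge):
    """Rough port of the page's test() function (unverified sketch)."""
    digits = list(str(challenge))
    digits.reverse()
    last_dig = digits[0]               # last digit of the original number
    digits.sort()                      # JS sort() mutates the same array
    min_dig = digits[0]                # smallest digit
    subvar1 = 2 * int(digits[2]) + int(digits[1])
    # in JS, (2 * var_arr[2]) + var_arr[1] is number + string, i.e. concatenation
    subvar2 = str(2 * int(digits[2])) + digits[1]
    my_pow = (int(digits[0]) + 2) ** int(digits[1])
    x = challenge * 3 + subvar1
    y = math.cos(math.pi * int(subvar2))   # always +/-1 for an integer argument
    answer = x * y - my_pow + int(min_dig) - int(last_dig)
    # the final JS addition is string concatenation again
    return str(round(answer)) + subvar2

print(solve_challenge(305158))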
Things I've tried:

Swapping requests.get() for requests.Session()
Adding a User-Agent header to the request
Ensuring the same packages are installed
#2
Based on this:
Quote:GenericErrorMessageCookies="Cookies must be enabled in order to view this page.";
I would suggest that your program is not using cookies.
(Mar-17-2019, 08:16 AM)yeungcase Wrote: I did check my browser setting and cookies are enabled.
Your browser settings have nothing to do with requests in Python; the requests module is what is sending the request data. Are you sending the cookie via the requests module?

Show us your code.
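A minimal sketch of what I mean, using requests.Session so that any cookie the server sets is sent back on the next request (the User-Agent string here is just a placeholder):

import requests

session = requests.Session()  # unlike bare requests.get(), a Session keeps cookies
session.headers.update({'User-Agent': 'Mozilla/5.0'})  # placeholder UA

url = 'https://racing.hkjc.com/racing/info/meeting/Results/English/Local/'
first = session.get(url)                  # the server may set a cookie here
print(session.cookies.get_dict())         # inspect what, if anything, was stored
second = session.get(url)                 # any stored cookie is sent back automatically
print(second.status_code)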
#3
(Mar-17-2019, 10:18 AM)metulburr Wrote: Show us your code.

from bs4 import BeautifulSoup
import requests
import pandas as pd
import xlsxwriter  # engine for pd.ExcelWriter below
import re
import os

## Scraping all race days on the site
race_day_url = 'https://racing.hkjc.com/racing/info/meeting/Results/English/Local/'
race_day_url_content = requests.get(race_day_url)
race_day_url_content.encoding = 'utf-8'
race_day_url_html_content = race_day_url_content.text
race_day_soup = BeautifulSoup(race_day_url_html_content, 'lxml')

race_day_soup2 = race_day_soup.find('div', class_="rowDiv5")
race_day = race_day_soup2.find('td', class_="tdAlignR")
options = race_day.find_all("option", {'value':re.compile('^Local')} )
raceday = options[1:]

jc_raceday_list = []
for each in raceday:
    value = each.text
    jc_raceday_list.append(value)

## Scraping all race days already in my folder
jay_raceday = os.listdir('C://AnyDirectory')
jay_raceday2 = []
for eachfile in jay_raceday:
    name = os.path.splitext(eachfile)[0]  # keep the return value; splitext does not modify in place
    jay_raceday2.append(name[0:10])

jay_raceday3 = [d[8:10]+"/"+d[5:7]+"/"+d[:4] for d in jay_raceday2]

## Identify the days on the site that are missing from the folder
daydeviation = []
for day in jc_raceday_list:
    if day not in jay_raceday3:
        daydeviation.append(day)

## Convert the dates into the URL format (DD/MM/YYYY -> YYYYMMDD)
deviations = [d[6:10] + d[3:5] + d[0:2] for d in daydeviation]

## Looping over all missing race days
for deviation in deviations:
    ## Scraping entries data
    booklet_name = deviation[0:4] + '-' + deviation[4:6] + '-' + deviation[6:8]
    entries_race_place = 'HV'
    entries_url = 'http://racing.hkjc.com/racing/info/meeting/Entries/English/Local/'+deviation+'/'+entries_race_place
    entries_request = requests.get(entries_url)
    entries_request.encoding = 'utf-8'
    entries_request_html_content = entries_request.text
    entries_soup = BeautifulSoup(entries_request_html_content, 'lxml')
    entries_table = entries_soup.find('table', class_='col_12')

    if entries_table is None:
        entries_race_place = 'ST' 
        entries_url = 'http://racing.hkjc.com/racing/info/meeting/Entries/English/Local/'+deviation+'/'+entries_race_place
        entries_request = requests.get(entries_url)
        entries_request.encoding = 'utf-8'
        entries_request_html_content = entries_request.text
        entries_soup = BeautifulSoup(entries_request_html_content, 'lxml')
        entries_table = entries_soup.find('table', class_='col_12')

    entries_content = []  # default, so the results loop below never hits an undefined name
    if entries_table:
        entries_trs = entries_table.find_all('tr')
        for entries_tr in entries_trs[6:]:
            for entries_td2 in entries_tr.find_all('td', {'class': ['alignL2', 'alignL2-grey']}):
                entries_content.append(entries_td2.text.strip('\n\r\t": '))

    writer = pd.ExcelWriter('C:\\AnyDirectory\\'+booklet_name+'.xlsx', engine='xlsxwriter')

    ## Scraping all the results
    for page in range(1, 13):
        result_race_place = 'HV'
        result_url = 'http://racing.hkjc.com/racing/info/meeting/Results/English/Local/'+deviation+'/'+result_race_place+'/'+str(page)
        result_request = requests.get(result_url)
        result_request.encoding = 'utf-8'
        result_html_content = result_request.text
        result_soup = BeautifulSoup(result_html_content, 'lxml')
        result_table = result_soup.find('table', class_='tableBorder trBgBlue tdAlignC number12 draggable')

        if result_table is None:
            result_race_place = 'ST' 
            result_url = 'http://racing.hkjc.com/racing/info/meeting/Results/English/Local/'+deviation+'/'+result_race_place+'/'+str(page)
            result_request = requests.get(result_url)
            result_request.encoding = 'utf-8'
            result_html_content = result_request.text
            result_soup = BeautifulSoup(result_html_content, 'lxml')
            result_table = result_soup.find('table', class_='tableBorder trBgBlue tdAlignC number12 draggable')

        if result_table:
            hds = result_soup.find('thead')
            if hds:
                headers = []
                for hds_td in hds.find_all('td'):
                    headers.append(hds_td.text.strip('\n\r\t": '))
                headers += ['Ace']        

                result_content = []
                result_row = []
                result_trs = result_table.find_all('tr', {'class': ['trBgGrey', 'trBgWhite']})
                for result_tr in result_trs:
                    result_tds = result_tr.find_all('td', {'nowrap': 'nowrap'})
                    for result_td in result_tds:
                        result_row.append(result_td.text.strip('\n\r\t": '))
                    result_content.append(result_row)
                    result_row = []

                for each_result in result_content:
                    new_result = each_result[2].split(sep='(')[0]
                    for that in entries_content:
                        # ('+' or '*' or '#') evaluates to just '+', so check each marker explicitly
                        has_marker = any(mark in that for mark in ('+', '*', '#'))
                        if new_result in that and has_marker:
                            answer = that.split(sep=new_result)[1][1]
                            if answer.isdigit():
                                ace = '-'
                            else:
                                ace = answer
                            each_result.append(ace)

                        elif new_result in that and not has_marker:
                            ace = '-'
                            each_result.append(ace)

                        if len(each_result) > 13:
                            del each_result[-1]

                df = pd.DataFrame(result_content, columns=headers)
                df.to_excel(writer, sheet_name='Race'+str(page))
        else:
            continue

    ## Save the workbook for this race day (without this the file is never written)
    writer.save()
#4
Can anyone help?
#5
You can run document.cookie in your console to read all the cookies accessible from that location.

Quote:document.write("Not all needed JavaScript methods are supported.<BR>");
Quote:<noscript>JavaScript must be enabled in order to view this page.</noscript>

It's possible they changed their site to require JavaScript. If so, that would stop requests in its tracks, and you will need Selenium to accomplish your task instead. It doesn't have to be the main page; any portion of the information you are getting could be fetched via JavaScript. I often have to change my scripts as admins change the HTML or add JavaScript to deter bots.

It is also entirely possible they have detected your bot, as you make a fair number of requests (one per entry / per page). Admins can rate-limit in iptables to greatly restrict the request volume allowed per source.
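If rate-limiting is what is happening, the easy client-side counter-measure is to slow your loop down; a sketch (the date in the URL is just an example):

import time
import requests

session = requests.Session()
# hypothetical race day; in your script this would come from the deviations loop
urls = ['http://racing.hkjc.com/racing/info/meeting/Results/English/Local/20190313/HV/' + str(n)
        for n in range(1, 13)]
for url in urls:
    resp = session.get(url)
    print(url, resp.status_code)
    time.sleep(2)  # pause so the request volume per source stays low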

However, based on your first post, I would suggest that either cookies or JavaScript is the issue.

Using Selenium does get the HTML without much hassle, by the way:

import time

from selenium import webdriver

race_day_url = 'https://racing.hkjc.com/racing/info/meeting/Results/English/Local/'

browser = webdriver.Firefox()
browser.get(race_day_url)
time.sleep(3)  # give the page a moment to run its JavaScript
print(browser.page_source)
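And from there you can keep your existing parsing, continuing from the snippet above:

from bs4 import BeautifulSoup

soup = BeautifulSoup(browser.page_source, 'lxml')   # browser from the snippet above
race_day_soup2 = soup.find('div', class_='rowDiv5')  # same lookup your script already does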

