{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Analyzing the Gutenberg Books Corpus - part 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we will use the Gutenberg Corpus in the same form as last week. \n", "\n", "In the [first analysis notebook](https://github.com/dslab2018/dslab2018.github.io/blob/master/notebooks/DSLab_week7_gutenberg_corpus.ipynb) we explored various RDD methods and in the end built an N-gram viewer for the Gutenberg books project. Now, we will use the corpus to train a simple language classification model using [Spark's machine learning library](http://spark.apache.org/docs/latest/mllib-guide.html) and Spark DataFrames.\n", "\n", "
The structure of this lab is as follows:\n", "\n", "1. initializing Spark and loading data\n", "2. construction of Spark DataFrames\n", "3. using core DataFrame functionality and comparisons to RDD methods\n", "4. using the Spark ML library for vectorization\n", "5. building a classifier pipeline\n
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up and launch the Spark runtime *on your laptop*" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# set this to the base spark directory on your system\n", "spark_home = '/Users/rok/src/spark'\n", "try:\n", " import findspark\n", " findspark.init(spark_home)\n", "except ModuleNotFoundError as e:\n", " print('Info: {}'.format(e))\n", "\n", "import getpass\n", "import pyspark" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.sql import SparkSession\n", "\n", "spark = SparkSession \\\n", " .builder \\\n", " .master(\"local[2]\") \\\n", " .appName(\"Gutenberg text modelling\") \\\n", " .config(\"spark.driver.host\", \"localhost\") \\\n", " .getOrCreate()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "\n", "
SparkSession - in-memory\n", "\n", "SparkContext\n", "\n", "Spark UI\n", "\n", "Version: v2.3.0\n", "Master: local[2]\n", "AppName: Gutenberg text modelling\n", "
\n", " " ], "text/plain": [ "" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sc = spark.sparkContext\n", "spark" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the data\n", "\n", "**TODO**: download the gutenberg_cleaned_rdd and extract it into the `data` directory in the base path of this repository.\n", "\n", "Load this as `cleaned_rdd` using `sc.sequenceFile`." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "cleaned_rdd = sc.sequenceFile('../data/gutenberg_cleaned_rdd/')" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 36.5 ms, sys: 13.9 ms, total: 50.4 ms\n", "Wall time: 5.5 s\n" ] }, { "data": { "text/plain": [ "25198" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%time cleaned_rdd.cache().count()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "'h_sides_dion_cassius_lx_35_says_that_seneca_composed_an_greek_apokolokuntosis_or_pumpkinification_of_claudius_after_his_death_the_title_being_a_parody_of_the_usual_greek_apotheosis_but_this_title_is_n'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cleaned_rdd.first()[1][:200]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that there were a few further pre-processing steps: we removed all punctuation, made the text lowercase, and replaced whitespace characters with \"_\"." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load in the metadata dictionary and broadcast it\n", "\n", "Just as in the previous notebook, we will load our pre-generated metadata dictionary and broadcast it to all the executors. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import json\n", "\n", "with open('../data/gutenberg_metadata.json', 'r') as f:\n", " meta = json.load(f)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# TODO: create meta_b by broadcasting meta\n", "meta_b = spark.sparkContext.broadcast(meta)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DataFrames\n", "\n", "A [`DataFrame`](http://spark.apache.org/docs/latest/sql-programming-guide.html#creating-dataframes) is analogous to Pandas or R dataframes. Since v2.0 DataFrames are the \"official\" API for Spark and, importantly, the development of the [machine learning library](http://spark.apache.org/docs/latest/ml-guide.html) is focused exclusively on the DataFrame API. Many low-level optimizations have been developed for DataFrames in recent versions of Spark, so that the overheads of using Python with Spark have also been minimized somewhat. Using DataFrames allows you to specify types for your operations, which means that they can be offloaded to the Scala backend and optimized by the runtime. \n", "\n", "However, you will frequently find that there is simply no easy way of doing a particular operation with the DataFrame methods and will need to resort to the lower-level RDD API. \n", "\n", "## Creating a DataFrame\n", "\n", "Here we will create a DataFrame out of the RDD that we were using in the previous exercises. The DataFrame is a much more natural fit for this dataset. 
The inclusion of the book metadata is much more natural here, simply as columns which can then be used in queries. \n", "\n", "To begin, we will map the RDD elements to type [Row](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Row) and recast the data as a DataFrame. Note that we are lazy here and are just using the default `StringType` for all columns, but we could be more specific and use e.g. `IntegerType` for the `gid` field. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.sql import Row\n", "from pyspark.sql.types import IntegerType, StringType, ArrayType, StructField, StructType\n", "\n", "# set up the Row \n", "df = spark.createDataFrame(\n", " cleaned_rdd.map(lambda x: Row(**meta_b.value[x[0]], text=x[1])), \n", ").cache()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For inspection, the `Row` class can be conveniently cast into a `dict`:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "{'author_id': '1308',\n", " 'author_name': ['Seneca', ' Lucius Annaeus'],\n", " 'birth_year': '1863',\n", " 'death_year': '65',\n", " 'downloads': '186',\n", " 'first_name': 'Lucius Annaeus',\n", " 'gid': '10001',\n", " 'language': 'en',\n", " 'last_name': 'Seneca',\n", " 'license': 'Public domain in the USA.',\n", " 'subtitle': '',\n", " 'text': 'h_sides_dion_cassius_lx_35_says_that_seneca_composed_an_greek_apokolokuntosis_or_pumpkinification_of_claudius_after_his_death_the_title_being_a_parody_of_the_usual_greek_apotheosis_but_this_title_is_not_given_in_the_mss_of_the_ludus_de_morte_claudii_nor_is_there_anything_in_the_piece_which_suits_the_title_very_well_as_a_literary_form_the_piece_belongs_to_the_class_called_satura_menippea_a_satiric_medley_in_prose_and_verse_this_text_is_that_of_buecheler_with_a_few_trifling_changes_which_are_indicated_in_the_notes_we_have_been_courteously_allowed_by_messrs_weidmann_to_use_this_text_i_have_to_acknowledge_the_help_of_mr_balls_notes_from_which_i_have_taken_a_few_references_but_my_translation_was_made_many_years_ago_whd_rouse_bibliography_editio_princeps_lucii_annaei_senecae_in_morte_claudii_caesaris_ludus_nuper_repertus_rome_1513_latest_critical_text_franz_buecheler_weidmann_1904_a_reprint_with_a_few_changes_of_the_text_from_a_larger_work_divi_claudii_greek_apokolokuntosis_in_the_symbola_philologorum_bonnensium_fasc_i_1864_translations_and_helps_the_satire_of_seneca_on_the_apotheosis_of_claudius_by_ap_ball_with_introduction_notes_and_translations_new_york_columbia_university_press_london_macmillan_1902_seneca_apocolocyntosis_or_ludus_de_morte_claudii_the_pumpkinification_of_claudius_i_wish_to_place_on_record_the_proceedings_in_heaven_1_october_13_last_of_the_new_year_which_begins_this_auspicious_age_it_shall_be_done_without_malice_or_favour_this_is_the_truth_ask_if_you_like_how_i_know_it_to_begin_with_i_am_not_bound_to_please_you_with_my_answer_who_will_compel_me_i_know_the_same_day_made_me_free_which_was_the_last_day_for_him_who_made_the_proverb_trueone_must_be_born_either_a_pharaoh_or_a_fool_if_i_choose_to_answer_i_will_say_whatever_trips_off_my_tongue_who_has_ever_made_the_historian_produce_witness_to_swear_for_him_but_if_an_authority_must_be_produced_ask_of_the_man_who_saw_drusilla_translated_to_heaven_the_same_man_will_aver_he_saw_claudius_on_the_road_dot_and_carry_one_sidenote_virg_aen_ii_724_will_he_nill_he_all_that_happens_in_heaven_he_needs_must_see_he_
is_the_custodian_of_the_appian_way_by_that_route_you_know_both_tiberius_and_augustus_went_up_to_the_gods_question_him_he_will_tell_you_the_tale_when_you_are_alone_before_company_he_is_dumb_you_see_he_swore_in_the_senate_that_he_beheld_drusilla_mounting_heavenwards_and_all_he_got_for_his_good_news_was_that_everybody_gave_him_the_lie_since_when_he_solemnly_swears_he_will_never_bear_witness_again_to_what_he_has_seen_not_even_if_he_had_seen_a_man_murdered_in_open_market_what_he_told_me_i_report_plain_and_clear_as_i_hope_for_his_health_and_happiness_now_had_the_sun_with_shorter_course_drawn_in_his_risen_light_2_and_by_equivalent_degrees_grew_the_dark_hours_of_night_victorious_cynthia_now_held_sway_over_a_wider_space_grim_winter_drove_rich_autumn_out_and_now_usurped_his_place_and_now_the_fiat_had_gone_forth_that_bacchus_must_grow_old_the_few_last_clusters_of_the_vine_were_gathered_ere_the_cold_i_shall_make_myself_better_understood_if_i_say_the_month_was_october_the_day_was_the_thirteenth_what_hour_it_was_i_cannot_certainly_tell_philosophers_will_agree_more_often_than_clocks_but_it_was_between_midday_and_one_after_noon_clumsy_creature_you_say_the_poets_are_not_content_to_describe_sunrise_and_sunset_and_now_they_even_disturb_the_midday_siesta_will_you_thus_neglect_so_good_an_hour_now_the_suns_chariot_had_gone_by_the_middle_of_his_way_half_wearily_he_shook_the_reins_nearer_to_night_than_day_and_led_the_light_along_the_slope_that_down_before_him_lay_claudius_began_to_breathe_his_last_and_could_not_3_make_an_end_of_the_matter_then_mercury_who_had_always_been_much_pleased_with_his_wit_drew_aside_one_of_the_three_fates_and_said_cruel_beldame_why_do_you_let_the_poor_wretch_be_tormented_after_all_this_torture_cannot_he_have_a_rest_four_and_sixty_years_it_is_now_since_he_began_to_pant_for_breath_what_grudge_is_this_you_bear_against_him_and_the_whole_empire_do_let_the_astrologers_tell_the_truth_for_once_since_he_became_emperor_they_have_never_let_a_year_pass_never_a_month_without_laying_him_out_for_his_burial_yet_it_is_no_wonder_if_they_are_wrong_and_no_one_knows_his_hour_nobody_ever_believed_he_was_really_quite_born_footnote_a_proverb_for_a_nobody_as_petron_58_qui_te_natum_non_putat_do_what_has_to_be_done_kill_him_and_let_a_better_man_rule_in_empty_court_sidenote_virg_georg_iv_90_clotho_replied_upon_my_word_i_did_wish_to_give_him_another_hour_or_two_until_he_should_make_roman_citizens_of_the_half_dozen_who_are_still_outsiders_he_made_up_his_mind_you_know_to_see_the_whole_world_in_the_toga_greeks_gauls_spaniards_britons_and_all_but_since_it_is_your_pleasure_to_leave_a_few_foreigners_for_seed_and_since_you_command_me_so_be_it_she_opened_her_box_and_out_came_three_spindles_one_was_for_augurinus_one_for_baba_one_for_claudius_footnote_augurinus_unknown_baba_see_sep_ep_159_a_fool_these_three_she_says_i_will_cause_to_die_within_one_year_and_at_no_great_distance_apart_and_i_will_not_dismiss_him_unattended_think_of_all_the_thousands_of_men_he_was_wont_to_see_following_after_him_thousands_going_before_thousands_all_crowding_about_him_and_it_would_never_do_to_leave_him_alone_on_a_sudden_these_boon_companions_will_satisfy_him_for_the_nonce_this_said_she_twists_the_thread_around_his_ugly_spindle_once_4_snaps_off_the_last_bit_of_the_life_of_that_imperial_dunce_but_lachesis_her_hair_adorned_her_tresses_neatly_bound_pierian_laurel_on_her_locks_her_brows_with_garlands_crowned_plucks_me_from_out_the_snowy_wool_new_threads_as_white_as_snow_which_handled_with_a_happy_touch_change_colour_as_they_go_not_common_wool_but_golden_w
ire_the_sisters_wondering_gaze_as_age_by_age_the_pretty_thread_runs_down_the_golden_days_world_without_end_they_spin_away_the_happy_fleeces_pull_what_joy_they_take_to_fill_their_hands_with_that_delightful_wool_indeed_the_task_performs_itself_no_toil_the_spinners_know_down_drops_the_soft_and_silken_thread_as_round_the_spindles_go_fewer_than_these_are_tithons_years_not_nestors_life_so_long_phoebus_is_present_glad_he_is_to_sing_a_merry_song_now_helps_the_work_now_full_of_hope_upon_the_harp_doth_play_the_sisters_listen_to_the_song_that_charms_their_toil_away_they_praise_their_brothers_melodies_and_still_the_spindles_run_till_more_than_mans_allotted_span_the_busy_hands_have_spun_then_phoebus_says_o_sister_fates_i_pray_take_none_away_but_suffer_this_one_life_to_be_longer_than_mortal_day_like_me_in_face_and_lovely_grace_like_me_in_voice_and_song_hell_bid_the_laws_at_length_speak_out_that_have_been_dumb_so_long_will_give_unto_the_weary_world_years_prosperous_and_bright_like_as_the_daystar_from_on_high_scatters_the_stars_of_night_as_when_the_stars_return_again_clear_hesper_brings_his_light_or_as_the_ruddy_dawn_drives_out_the_dark_and_brings_the_day_as_the_bright_sun_looks_on_the_world_and_speeds_along_its_way_his_rising_car_from_mornings_gates_so_caesar_doth_arise_so_nero_shows_his_face_to_rome_before_the_peoples_eyes_his_bright_and_shining_countenance_illumines_all_the_air_while_down_upon_his_graceful_neck_fall_rippling_waves_of_hair_thus_apollo_but_lachesis_quite_as_ready_to_cast_a_favourable_eye_on_a_handsome_man_spins_away_by_the_handful_and_bestows_years_and_years_upon_nero_out_of_her_own_pocket_as_for_claudius_they_tell_everybody_to_speed_him_on_his_way_with_cries_of_joy_and_solemn_litany_at_once_he_bubbled_up_the_ghost_and_there_was_an_end_to_that_shadow_of_a_life_he_was_listening_to_a_troupe_of_comedians_when_he_died_so_you_see_i_have_reason_to_fear_those_gentry_the_last_words_he_was_heard_to_speak_in_this_world_were_these_when_he_had_made_a_great_noise_with_that_end_of_him_which_talked_easiest_he_cried_out_oh_dear_oh_dear_i_think_i_have_made_a_mess_of_myself_whether_he_did_or_no_i_cannot_say_but_certain_it_is_he_always_did_make_a_mess_of_everything_what_happened_next_on_earth_it_is_mere_waste_of_5_time_to_tell_for_you_know_it_all_well_enough_and_there_is_no_fear_of_your_ever_forgetting_the_impression_which_that_public_rejoicing_made_on_your_memory_no_one_forgets_his_own_happiness_what_happened_in_heaven_you_shall_hear_for_proof_please_apply_to_my_informant_word_comes_to_jupiter_that_a_stranger_had_arrived_a_man_well_set_up_pretty_grey_he_seemed_to_be_threatening_something_for_he_wagged_his_head_ceaselessly_he_dragged_the_right_foot_they_asked_him_what_nation_he_was_of_he_answered_something_in_a_confused_mumbling_voice_his_language_they_did_not_understand_he_was_no_greek_and_no_roman_nor_of_any_known_race_on_this_jupiter_bids_hercules_go_and_find_out_what_country_he_comes_from_you_see_hercules_had_travelled_over_the_whole_world_and_might_be_expected_to_know_all_the_nations_in_it_but_hercules_the_first_glimpse_he_got_was_really_much_taken_aback_although_not_all_the_monsters_in_the_world_could_frighten_him_when_he_saw_this_new_kind_of_object_with_its_extraordinary_gait_and_the_voice_of_no_terrestrial_beast_but_such_as_you_might_hear_in_the_leviathans_of_the_deep_hoarse_and_inarticulate_he_thoug',\n", " 'title': 'Apocolocyntosis'}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# first row\n", "df.first().asDict()" ] }, { "cell_type": "code", 
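"execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A possible refinement mentioned in the markdown above: instead of leaving every column\n", "# as the default StringType, we could declare an explicit schema and cast the 'gid' field\n", "# to an integer. This is only a hedged sketch reusing cleaned_rdd and meta_b from above;\n", "# typed_schema and typed_df are illustrative names and are not used later in the lab.\n", "typed_schema = StructType([\n", "    StructField('gid', IntegerType(), True),\n", "    StructField('title', StringType(), True),\n", "    StructField('text', StringType(), True),\n", "])\n", "typed_df = spark.createDataFrame(\n", "    cleaned_rdd.map(lambda x: (int(meta_b.value[x[0]]['gid']), meta_b.value[x[0]]['title'], x[1])),\n", "    schema=typed_schema)\n", "typed_df.printSchema()" ] }, { "cell_type": "code", 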
"execution_count": 12, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "['author_id',\n", " 'author_name',\n", " 'birth_year',\n", " 'death_year',\n", " 'downloads',\n", " 'first_name',\n", " 'gid',\n", " 'language',\n", " 'last_name',\n", " 'license',\n", " 'subtitle',\n", " 'text',\n", " 'title']" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The DataFrame includes convenience methods for quickly inspecting the data. For example:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+-------+------------------+\n", "|summary| birth_year|\n", "+-------+------------------+\n", "| count| 20934|\n", "| mean|1829.9672587614018|\n", "| stddev|114.48079532175821|\n", "| min| -100 BC|\n", "| max| 973|\n", "+-------+------------------+\n", "\n" ] } ], "source": [ "df.describe('birth_year').show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Certain operations are much more covenient with the DataFrame API, such as `groupBy`, which yields a special [`GroupedData`](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.GroupedData) object. Check out the API for the different operations you can perform on grouped data -- here we use `count` to get the equivalent of our author-count from the previous exercise:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+--------------------+-----+\n", "| author_name|count|\n", "+--------------------+-----+\n", "| [Various]| 1654|\n", "| null| 835|\n", "| [Anonymous]| 278|\n", "|[Balzac, Honoré de]| 121|\n", "|[Kingston, Willi...| 113|\n", "| [Twain, Mark]| 104|\n", "|[Ballantyne, R. ...| 95|\n", "|[Jacobs, W. W. (...| 94|\n", "| [Unknown]| 92|\n", "|[Shakespeare, Wi...| 87|\n", "| [Pepys, Samuel]| 85|\n", "|[Fenn, George Ma...| 83|\n", "| [Dumas, Alexandre]| 75|\n", "| [Verne, Jules]| 74|\n", "| [Sand, George]| 73|\n", "|[Howells, Willia...| 70|\n", "|[Churchill, Wins...| 67|\n", "| [Dickens, Charles]| 61|\n", "|[Henty, G. A. 
(G...| 60|\n", " | [Harte, Bret]| 58|\n", "+--------------------+-----+\n", "only showing top 20 rows\n", "\n" ] } ], "source": [ "(df.groupBy('author_name')\n", " .count()\n", " .sort('count', ascending=False)\n", " .show()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating new columns\n", "\n", "Let's make a new column with a publication date similar to the previous notebook:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "scrolled": true }, "outputs": [], "source": [ "df = df.withColumn('publication_year', (df.birth_year + 40))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**TODO**: Show author name, title and publication year; sort by publication_year in descending order" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+-----------------+--------------------+----------------+\n", "| author_name| title|publication_year|\n", "+-----------------+--------------------+----------------+\n", "| [Blade, Zoë]| Identity| 2021.0|\n", "| [Blade, Zoë]| Less than Human| 2021.0|\n", "|[Doctorow, Cory]| A Place so Foreign| 2011.0|\n", "|[Doctorow, Cory]|Eastern Standard ...| 2011.0|\n", "|[Doctorow, Cory]| Little Brother| 2011.0|\n", "|[Doctorow, Cory]|Return to Pleasur...| 2011.0|\n", "|[Doctorow, Cory]|Someone Comes to ...| 2011.0|\n", "|[Doctorow, Cory]|Ebooks: Neither E...| 2011.0|\n", "|[Doctorow, Cory]|Home Again, Home ...| 2011.0|\n", "|[Doctorow, Cory]| Printcrime| 2011.0|\n", "|[Doctorow, Cory]| Craphound| 2011.0|\n", "|[Doctorow, Cory]|Super Man and the...| 2011.0|\n", "|[Doctorow, Cory]|Shadow of the Mot...| 2011.0|\n", "|[Camacho, Jorge]|La Majstro kaj Ma...| 2006.0|\n", "|[Camacho, Jorge]|La liturgio de l'...| 2006.0|\n", "|[Camacho, Jorge]|La liturgio de l'...| 2006.0|\n", "|[Vaknin, Samuel]|Cyclopedia of Phi...| 2001.0|\n", "|[Vaknin, Samuel]| The Capgras Shift| 2001.0|\n", "| [Obama, Barack]|Inaugural Preside...| 2001.0|\n", "|[Vaknin, Samuel]|The Suffering of ...| 2001.0|\n", "+-----------------+--------------------+----------------+\n", "only showing top 20 rows\n", "\n" ] } ], "source": [ "df.select('author_name', 'title', 'publication_year').sort(df.publication_year.desc()).show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Language classification with Spark ML\n", "\n", "Here we will use some of the same techniques we developed in the last exercise, but this time we will use the built-in methods of the [Spark ML library](http://spark.apache.org/docs/2.2.0/api/python/pyspark.ml#) instead of coding up our own transformation functions. We will apply the N-Gram technique to build a simple language classification model. \n", "\n", "The method is rather straightforward and is outlined in [Cavnar & Trenkle 1994](http://odur.let.rug.nl/~vannoord/TextCat/textcat.pdf):\n", "\n", "For each of the English/German training sets:\n", "\n", "1. tokenize the text (spaces are also tokens, so we replace them with \"_\")\n", "2. extract N-grams where 1 < N < 5\n", "3. determine the most common N-grams for each corpus\n", "4. encode both sets of documents using the combined top ngrams\n", "\n", "\n", "## Character tokens vs. Word tokens\n", "In the last notebook, we used words as \"tokens\" -- now we will use characters, even accounting for white space (which we have replaced with \"_\" above). We will use the two example sentences again:\n", "\n", " document 1: \"John likes to watch movies. 
Mary likes movies too.\"\n", " document 2: \"John also likes to watch football games\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SparkML feature transformers\n", "\n", "The SparkML library includes many data transformers that all support the same API (much in the same vein as Scikit-Learn). Here we are using the [`CountVectorizer`](http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.CountVectorizer), [`NGram`](http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.NGram) and [`RegexTokenizer`](http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.RegexTokenizer). " ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.ml.feature import CountVectorizer, NGram, RegexTokenizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define the transformations\n", "\n", "We instantiate the three transformers that will be applied in turn. We will pass the output of one as the input of the next -- in the end our DataFrame will contain a column `vectors` that will be the vectorized version of the documents. " ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "scrolled": true }, "outputs": [], "source": [ "regex_tokenizer = RegexTokenizer(inputCol=\"text\", outputCol=\"tokens\", gaps=False, pattern='\S')\n", "ngram = NGram(n=2, inputCol='tokens', outputCol='ngrams')\n", "count_vectorizer = CountVectorizer(inputCol=\"ngrams\", outputCol=\"vectors\", vocabSize=1000)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So let's see what this does to our test sentences:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[Row(text='John likes to watch movies. Mary likes movies too.'),\n", " Row(text='John also likes to watch football games')]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_df = spark.createDataFrame([('John likes to watch movies. Mary likes movies too.',), ('John also likes to watch football games',)], ['text'])\n", "\n", "test_df.collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**TODO**: Figure out how to run the `test_df` through the two transformers and generate a `test_ngram_df`. `show()` the `text`, `tokens`, and `ngrams` columns."
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+--------------------+--------------------+--------------------+\n", "| text| tokens| ngrams|\n", "+--------------------+--------------------+--------------------+\n", "|John likes to wat...|[j, o, h, n, l, i...|[j o, o h, h n, n...|\n", "|John also likes t...|[j, o, h, n, a, l...|[j o, o h, h n, n...|\n", "+--------------------+--------------------+--------------------+\n", "\n" ] } ], "source": [ "test_ngram_df = ngram.transform(\n", " regex_tokenizer.transform(test_df)\n", ")\n", "test_ngram_df.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**TODO**: Fit the `CountVectorizer` with `n=2` ngrams and store in `test_cv_model`:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": true }, "outputs": [], "source": [ "test_cv_model = count_vectorizer.fit(test_ngram_df)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "['e s',\n", " 'i k',\n", " 'l i',\n", " 's t',\n", " 'k e',\n", " 't o',\n", " 'c h',\n", " 'm o',\n", " 'j o',\n", " 'i e',\n", " 'o w',\n", " 'a t',\n", " 'o o',\n", " 't c',\n", " 'h n',\n", " 'v i',\n", " 'o v',\n", " 'o h',\n", " 'w a',\n", " 'a l',\n", " 't b',\n", " 's o',\n", " 'm e',\n", " 'l l',\n", " 'y l',\n", " 'h f',\n", " 'm a',\n", " 'g a',\n", " 'n l',\n", " 'o l',\n", " 'f o',\n", " 's .',\n", " 'n a',\n", " 'b a',\n", " 'a m',\n", " 'l g',\n", " 's m',\n", " 'a r',\n", " 'o .',\n", " 'h m',\n", " '. m',\n", " 'l s',\n", " 'o t',\n", " 'r y']" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_cv_model.vocabulary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**TODO**: transform `test_ngram_df` into vectors:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "|vectors |\n", "+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "|(44,[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,24,26,28,31,36,37,38,39,40,43],[4.0,2.0,2.0,2.0,2.0,2.0,1.0,2.0,1.0,2.0,1.0,1.0,1.0,1.0,1.0,2.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]) |\n", "|(44,[0,1,2,3,4,5,6,8,10,11,12,13,14,17,18,19,20,21,22,23,25,27,29,30,32,33,34,35,41,42],[2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0])|\n", "+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "\n" ] } ], "source": [ "test_cv_model.transform(test_ngram_df).select('vectors').show(truncate=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## ML Pipelines\n", "\n", "Keeping track of these steps is a bit tedious -- if we wanted to repeat the above steps on different data, we would either have to write a wrapper function or re-execute all the cells again. 
It would be great if we could create a *pipeline* that encapsulated these steps and all we had to do was provide the inputs and parameters. \n", "\n", "The Spark ML library includes this concept of [Pipelines](https://spark.apache.org/docs/2.2.0/ml-pipeline.html) and we can use it to simplify complex ML workflows." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.ml import Pipeline" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "scrolled": true }, "outputs": [], "source": [ "cv_pipeline = Pipeline(\n", " stages=[\n", " regex_tokenizer,\n", " ngram,\n", " count_vectorizer,\n", " ]\n", ")" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+--------------------+--------------------+--------------------+--------------------+\n", "| text| tokens| ngrams| vectors|\n", "+--------------------+--------------------+--------------------+--------------------+\n", "|John likes to wat...|[j, o, h, n, l, i...|[j o, o h, h n, n...|(44,[0,1,2,3,4,5,...|\n", "|John also likes t...|[j, o, h, n, a, l...|[j o, o h, h n, n...|(44,[0,1,2,3,4,5,...|\n", "+--------------------+--------------------+--------------------+--------------------+\n", "\n" ] } ], "source": [ "(\n", " cv_pipeline.fit(test_df)\n", " .transform(test_df)\n", " .show()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is much more concise and much less error prone! The really cool thing about pipelines is that I can now very easily change the parameters of the different components. Imagine we wanted to fit trigrams (`n=3`) instead of bigrams (`n=2`), and we wanted to change the name of the final column. We can reuse the same pipeline but feed it a *parameter map* specifying the changed parameter value:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+--------------------+--------------------+--------------------+--------------------+\n", "| text| tokens| ngrams| new_vectors|\n", "+--------------------+--------------------+--------------------+--------------------+\n", "|John likes to wat...|[j, o, h, n, l, i...|[j o h, o h n, h ...|(50,[0,1,2,3,4,5,...|\n", "|John also likes t...|[j, o, h, n, a, l...|[j o h, o h n, h ...|(50,[0,1,2,3,4,5,...|\n", "+--------------------+--------------------+--------------------+--------------------+\n", "\n" ] } ], "source": [ "# note the dictionaries added to fit() and transform() arguments\n", "(\n", " cv_pipeline.fit(test_df, {ngram.n:3})\n", " .transform(test_df, {count_vectorizer.outputCol: 'new_vectors'})\n", " .show()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Building a more complex pipeline\n", "\n", "For our language classification we want to use ngrams 1-3. We can build a function that will yield a pipeline with this more complex setup. Our procedure here is like this:\n", "\n", "1. tokenize as before\n", "2. assemble the ngram transformers to yield n=1, n=2, etc columns\n", "3. vectorize using each set of ngrams giving partial vectors\n", "4. 
assemble the vectors into one complete feature vector" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.ml.feature import VectorAssembler\n", "\n", "def ngram_vectorize(min_n=1, max_n=1, min_df=1):\n", " \"\"\"Use a range of ngrams to vectorize a corpus\"\"\"\n", " tokenizer = RegexTokenizer(inputCol=\"text\", outputCol=\"tokens\", gaps=False, pattern='\\S')\n", " \n", " ngrams = []\n", " count_vectorizers = []\n", " \n", " for i in range(min_n, max_n+1):\n", " ngrams.append(\n", " NGram(n=i, inputCol='tokens', outputCol='ngrams_'+str(i))\n", " )\n", " count_vectorizers.append(\n", " CountVectorizer(inputCol='ngrams_'+str(i), outputCol='vectors_'+str(i), vocabSize=1000, minDF=min_df)\n", " )\n", " \n", " assembler = VectorAssembler(\n", " inputCols=['vectors_'+str(i) for i in range(min_n, max_n+1)], outputCol='features')\n", " \n", " return Pipeline(stages=[tokenizer] + ngrams + count_vectorizers + [assembler])" ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "|features |\n", "+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "|[6.0,4.0,3.0,2.0,4.0,2.0,4.0,3.0,2.0,2.0,2.0,1.0,2.0,1.0,1.0,1.0,0.0,0.0,0.0,1.0,1.0,4.0,2.0,2.0,2.0,2.0,2.0,1.0,2.0,1.0,2.0,1.0,1.0,1.0,1.0,1.0,2.0,2.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,1.0,2.0,2.0,2.0,2.0,2.0,1.0,1.0,2.0,1.0,2.0,2.0,2.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,0.0,0.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0]|\n", "|[5.0,3.0,3.0,4.0,2.0,4.0,1.0,1.0,2.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,2.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,1.0,1.0,2.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,1.0,0.0,1.0,0.0,1.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,1.0,1.0,1.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,1.0]|\n", "+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n", "\n" ] } ], "source": [ 
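"# fit the combined 1- to 3-gram pipeline on the two toy sentences and inspect the assembled feature vectors\n", 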
"ngram_vectorize(1,3).fit(test_df).transform(test_df).select('features').show(truncate=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Preparing the DataFrames and models\n", "\n", "For our language classifier we will use just two languages (English and either German or French). We need to create a DataFrame that is filtered to just include those languages. \n", "\n", "In addition, we will need this step of transforming raw string documents into vectors when we try the classifier on new data. We should therefore save the fitted NGram model for later. " ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "scrolled": true }, "outputs": [], "source": [ "lang_df = df.filter(df.language.isin('en', 'de', 'fr')).cache()" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "scrolled": true }, "outputs": [], "source": [ "ngram_model = ngram_vectorize(1,3, min_df=10).fit(lang_df)" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "Row(features=SparseVector(2142, {0: 1734.0, 1: 896.0, 2: 609.0, 3: 588.0, 4: 570.0, 5: 473.0, 6: 495.0, 7: 502.0, 8: 414.0, 9: 485.0, 10: 321.0, 11: 291.0, 12: 235.0, 13: 162.0, 14: 158.0, 15: 142.0, 16: 138.0, 17: 205.0, 18: 143.0, 19: 146.0, 20: 112.0, 21: 61.0, 22: 60.0, 23: 10.0, 24: 6.0, 25: 8.0, 26: 5.0, 28: 4.0, 29: 3.0, 30: 4.0, 31: 2.0, 32: 3.0, 33: 4.0, 34: 5.0, 35: 4.0, 36: 1.0, 37: 1.0, 142: 361.0, 143: 259.0, 144: 244.0, 145: 223.0, 146: 214.0, 147: 181.0, 148: 141.0, 149: 150.0, 150: 141.0, 151: 91.0, 152: 99.0, 153: 116.0, 154: 117.0, 155: 97.0, 156: 90.0, 157: 79.0, 158: 110.0, 159: 131.0, 160: 154.0, 161: 96.0, 162: 80.0, 163: 68.0, 164: 79.0, 165: 58.0, 166: 70.0, 167: 84.0, 168: 65.0, 169: 76.0, 170: 53.0, 171: 76.0, 172: 57.0, 173: 66.0, 174: 71.0, 175: 76.0, 176: 44.0, 177: 77.0, 178: 45.0, 179: 60.0, 180: 80.0, 181: 56.0, 182: 59.0, 183: 46.0, 184: 46.0, 185: 59.0, 186: 48.0, 187: 42.0, 188: 25.0, 189: 75.0, 190: 52.0, 191: 60.0, 192: 49.0, 193: 28.0, 194: 33.0, 195: 47.0, 196: 48.0, 197: 34.0, 198: 41.0, 199: 50.0, 200: 54.0, 201: 64.0, 202: 32.0, 203: 27.0, 204: 61.0, 205: 57.0, 206: 40.0, 207: 25.0, 208: 40.0, 209: 32.0, 210: 23.0, 211: 67.0, 212: 26.0, 213: 20.0, 214: 37.0, 215: 28.0, 216: 35.0, 217: 42.0, 218: 30.0, 219: 42.0, 220: 30.0, 221: 38.0, 222: 23.0, 223: 22.0, 224: 30.0, 225: 17.0, 226: 63.0, 227: 45.0, 228: 31.0, 229: 23.0, 230: 13.0, 231: 25.0, 232: 32.0, 233: 30.0, 234: 32.0, 235: 39.0, 236: 13.0, 237: 30.0, 238: 27.0, 239: 35.0, 240: 18.0, 241: 26.0, 242: 39.0, 243: 18.0, 244: 20.0, 245: 26.0, 246: 28.0, 247: 36.0, 248: 34.0, 249: 31.0, 250: 30.0, 251: 22.0, 252: 19.0, 253: 16.0, 254: 14.0, 255: 38.0, 256: 14.0, 257: 19.0, 258: 11.0, 259: 14.0, 260: 17.0, 261: 17.0, 262: 13.0, 263: 7.0, 264: 14.0, 265: 19.0, 266: 15.0, 267: 25.0, 268: 9.0, 269: 15.0, 270: 23.0, 271: 23.0, 272: 15.0, 273: 18.0, 274: 19.0, 275: 31.0, 276: 6.0, 277: 8.0, 278: 9.0, 279: 18.0, 280: 14.0, 281: 19.0, 282: 9.0, 283: 7.0, 284: 16.0, 285: 12.0, 286: 19.0, 287: 17.0, 288: 10.0, 289: 32.0, 290: 13.0, 291: 22.0, 292: 21.0, 293: 34.0, 294: 9.0, 295: 17.0, 296: 12.0, 297: 6.0, 298: 6.0, 299: 10.0, 300: 14.0, 301: 9.0, 302: 5.0, 303: 9.0, 304: 15.0, 305: 13.0, 306: 21.0, 307: 19.0, 308: 9.0, 309: 14.0, 310: 5.0, 311: 19.0, 312: 8.0, 313: 9.0, 314: 11.0, 315: 17.0, 316: 17.0, 317: 14.0, 318: 5.0, 319: 14.0, 320: 20.0, 321: 4.0, 322: 6.0, 323: 13.0, 324: 13.0, 325: 14.0, 326: 18.0, 327: 14.0, 328: 12.0, 329: 4.0, 330: 8.0, 331: 5.0, 332: 6.0, 
333: 3.0, 334: 15.0, 335: 7.0, 336: 11.0, 337: 7.0, 338: 7.0, 339: 7.0, 340: 13.0, 341: 10.0, 342: 10.0, 343: 8.0, 344: 8.0, 345: 4.0, 346: 9.0, 347: 9.0, 348: 5.0, 349: 6.0, 350: 12.0, 351: 5.0, 352: 17.0, 353: 7.0, 354: 15.0, 355: 4.0, 356: 6.0, 357: 8.0, 358: 5.0, 359: 12.0, 360: 8.0, 361: 4.0, 362: 12.0, 363: 18.0, 364: 1.0, 365: 11.0, 366: 9.0, 367: 13.0, 368: 2.0, 369: 12.0, 370: 10.0, 371: 8.0, 372: 7.0, 373: 8.0, 374: 1.0, 375: 3.0, 376: 4.0, 377: 8.0, 378: 6.0, 379: 5.0, 380: 8.0, 381: 6.0, 382: 5.0, 383: 11.0, 384: 9.0, 386: 8.0, 387: 5.0, 388: 6.0, 389: 5.0, 390: 8.0, 391: 3.0, 392: 4.0, 393: 6.0, 394: 3.0, 395: 2.0, 396: 6.0, 397: 2.0, 398: 2.0, 399: 6.0, 400: 6.0, 401: 4.0, 402: 6.0, 403: 5.0, 404: 5.0, 405: 4.0, 406: 2.0, 407: 17.0, 408: 7.0, 409: 3.0, 410: 5.0, 411: 3.0, 412: 4.0, 413: 11.0, 414: 2.0, 415: 10.0, 416: 3.0, 417: 1.0, 418: 3.0, 419: 7.0, 420: 5.0, 421: 2.0, 422: 5.0, 423: 2.0, 425: 1.0, 426: 3.0, 427: 5.0, 428: 2.0, 429: 3.0, 430: 4.0, 431: 1.0, 432: 1.0, 433: 7.0, 434: 2.0, 435: 2.0, 436: 6.0, 437: 4.0, 439: 1.0, 440: 7.0, 442: 2.0, 443: 5.0, 444: 3.0, 445: 2.0, 446: 4.0, 447: 7.0, 448: 1.0, 449: 1.0, 450: 1.0, 452: 1.0, 453: 3.0, 454: 7.0, 455: 2.0, 456: 2.0, 457: 1.0, 459: 6.0, 462: 2.0, 463: 1.0, 464: 7.0, 466: 2.0, 467: 1.0, 470: 2.0, 471: 3.0, 472: 1.0, 473: 3.0, 474: 1.0, 475: 2.0, 476: 1.0, 477: 3.0, 478: 2.0, 479: 1.0, 480: 1.0, 481: 4.0, 482: 2.0, 487: 3.0, 488: 1.0, 489: 2.0, 491: 1.0, 492: 1.0, 494: 5.0, 496: 1.0, 499: 2.0, 505: 3.0, 506: 1.0, 508: 2.0, 511: 1.0, 513: 5.0, 514: 2.0, 520: 3.0, 522: 1.0, 523: 2.0, 524: 2.0, 526: 2.0, 528: 1.0, 529: 1.0, 535: 2.0, 536: 1.0, 537: 1.0, 538: 1.0, 550: 10.0, 554: 2.0, 565: 1.0, 566: 1.0, 569: 1.0, 571: 4.0, 576: 2.0, 584: 2.0, 585: 1.0, 586: 1.0, 588: 1.0, 589: 3.0, 594: 2.0, 595: 5.0, 603: 1.0, 613: 1.0, 615: 1.0, 616: 3.0, 618: 2.0, 619: 1.0, 639: 1.0, 640: 1.0, 665: 1.0, 670: 1.0, 675: 1.0, 676: 1.0, 687: 1.0, 689: 1.0, 701: 2.0, 709: 2.0, 712: 1.0, 743: 1.0, 749: 1.0, 751: 1.0, 752: 1.0, 756: 1.0, 784: 1.0, 793: 1.0, 807: 1.0, 918: 1.0, 946: 2.0, 1142: 174.0, 1143: 131.0, 1144: 147.0, 1145: 58.0, 1146: 42.0, 1147: 57.0, 1148: 56.0, 1149: 40.0, 1150: 38.0, 1151: 48.0, 1152: 46.0, 1153: 26.0, 1154: 25.0, 1155: 29.0, 1156: 28.0, 1157: 39.0, 1158: 31.0, 1159: 28.0, 1160: 56.0, 1161: 26.0, 1162: 33.0, 1163: 47.0, 1164: 34.0, 1165: 33.0, 1166: 35.0, 1167: 32.0, 1168: 21.0, 1169: 67.0, 1170: 22.0, 1171: 26.0, 1172: 30.0, 1173: 38.0, 1174: 18.0, 1175: 33.0, 1176: 37.0, 1177: 7.0, 1178: 39.0, 1179: 12.0, 1180: 34.0, 1181: 8.0, 1182: 23.0, 1183: 28.0, 1184: 11.0, 1185: 15.0, 1186: 12.0, 1187: 34.0, 1188: 17.0, 1189: 9.0, 1190: 41.0, 1191: 37.0, 1192: 24.0, 1193: 19.0, 1194: 34.0, 1195: 11.0, 1196: 29.0, 1197: 16.0, 1198: 14.0, 1199: 27.0, 1200: 28.0, 1201: 18.0, 1202: 27.0, 1203: 44.0, 1204: 41.0, 1205: 11.0, 1206: 26.0, 1207: 27.0, 1208: 11.0, 1209: 10.0, 1210: 22.0, 1211: 16.0, 1212: 16.0, 1213: 20.0, 1214: 16.0, 1215: 10.0, 1216: 25.0, 1217: 26.0, 1218: 15.0, 1219: 20.0, 1220: 19.0, 1221: 18.0, 1222: 14.0, 1223: 27.0, 1224: 10.0, 1225: 25.0, 1226: 23.0, 1227: 5.0, 1228: 20.0, 1229: 18.0, 1230: 6.0, 1231: 19.0, 1232: 15.0, 1233: 15.0, 1234: 21.0, 1235: 17.0, 1236: 11.0, 1237: 15.0, 1238: 19.0, 1239: 12.0, 1240: 19.0, 1241: 11.0, 1242: 22.0, 1243: 20.0, 1244: 12.0, 1245: 16.0, 1246: 16.0, 1247: 17.0, 1248: 14.0, 1249: 12.0, 1250: 11.0, 1251: 20.0, 1252: 21.0, 1253: 12.0, 1254: 15.0, 1255: 10.0, 1256: 9.0, 1257: 15.0, 1258: 10.0, 1259: 16.0, 1260: 15.0, 1261: 3.0, 1262: 8.0, 1263: 11.0, 1264: 10.0, 1265: 6.0, 
1266: 6.0, 1267: 9.0, 1268: 21.0, 1269: 6.0, 1270: 6.0, 1271: 21.0, 1272: 19.0, 1273: 18.0, 1274: 15.0, 1275: 11.0, 1276: 14.0, 1277: 14.0, 1278: 19.0, 1279: 15.0, 1280: 18.0, 1281: 20.0, 1282: 21.0, 1283: 11.0, 1284: 22.0, 1285: 9.0, 1286: 10.0, 1287: 26.0, 1288: 8.0, 1289: 12.0, 1290: 21.0, 1291: 9.0, 1292: 14.0, 1293: 2.0, 1294: 3.0, 1295: 20.0, 1296: 10.0, 1297: 12.0, 1298: 9.0, 1299: 5.0, 1300: 14.0, 1301: 17.0, 1302: 17.0, 1303: 14.0, 1304: 14.0, 1305: 14.0, 1306: 5.0, 1307: 6.0, 1308: 3.0, 1309: 14.0, 1310: 7.0, 1311: 8.0, 1312: 24.0, 1313: 7.0, 1314: 7.0, 1315: 4.0, 1316: 24.0, 1317: 9.0, 1318: 9.0, 1319: 18.0, 1320: 18.0, 1321: 7.0, 1322: 6.0, 1323: 7.0, 1324: 3.0, 1325: 8.0, 1326: 15.0, 1327: 9.0, 1328: 17.0, 1329: 5.0, 1330: 7.0, 1331: 10.0, 1332: 12.0, 1333: 6.0, 1334: 6.0, 1335: 9.0, 1336: 10.0, 1337: 15.0, 1338: 19.0, 1339: 9.0, 1340: 3.0, 1341: 8.0, 1342: 7.0, 1343: 18.0, 1344: 4.0, 1345: 6.0, 1346: 2.0, 1347: 20.0, 1348: 4.0, 1349: 15.0, 1350: 11.0, 1351: 14.0, 1352: 10.0, 1353: 4.0, 1354: 12.0, 1355: 10.0, 1356: 8.0, 1357: 14.0, 1358: 8.0, 1359: 4.0, 1360: 7.0, 1361: 7.0, 1362: 5.0, 1363: 9.0, 1364: 10.0, 1365: 3.0, 1366: 6.0, 1367: 10.0, 1368: 4.0, 1369: 12.0, 1370: 17.0, 1371: 12.0, 1372: 9.0, 1373: 10.0, 1374: 6.0, 1375: 7.0, 1376: 8.0, 1377: 7.0, 1378: 8.0, 1379: 3.0, 1380: 11.0, 1381: 8.0, 1382: 6.0, 1383: 6.0, 1384: 19.0, 1385: 5.0, 1386: 6.0, 1387: 7.0, 1388: 4.0, 1389: 12.0, 1390: 7.0, 1391: 3.0, 1392: 3.0, 1393: 17.0, 1394: 8.0, 1395: 10.0, 1396: 11.0, 1397: 8.0, 1398: 1.0, 1399: 8.0, 1400: 7.0, 1401: 7.0, 1402: 8.0, 1403: 12.0, 1404: 13.0, 1405: 3.0, 1406: 6.0, 1407: 6.0, 1408: 3.0, 1409: 8.0, 1410: 3.0, 1411: 8.0, 1412: 2.0, 1413: 6.0, 1414: 5.0, 1415: 4.0, 1416: 2.0, 1417: 9.0, 1418: 2.0, 1419: 4.0, 1420: 9.0, 1421: 7.0, 1422: 14.0, 1423: 2.0, 1424: 6.0, 1425: 4.0, 1426: 2.0, 1427: 8.0, 1428: 2.0, 1429: 7.0, 1430: 5.0, 1431: 5.0, 1432: 10.0, 1433: 5.0, 1434: 6.0, 1435: 7.0, 1436: 6.0, 1437: 10.0, 1438: 4.0, 1439: 10.0, 1440: 8.0, 1441: 2.0, 1442: 14.0, 1443: 5.0, 1444: 10.0, 1445: 7.0, 1446: 7.0, 1447: 8.0, 1448: 3.0, 1449: 2.0, 1450: 6.0, 1451: 7.0, 1452: 6.0, 1453: 3.0, 1454: 8.0, 1455: 7.0, 1456: 8.0, 1457: 6.0, 1458: 13.0, 1459: 7.0, 1460: 4.0, 1461: 5.0, 1462: 2.0, 1463: 1.0, 1464: 15.0, 1465: 3.0, 1466: 2.0, 1467: 10.0, 1468: 10.0, 1470: 4.0, 1471: 9.0, 1472: 6.0, 1473: 9.0, 1474: 4.0, 1475: 4.0, 1476: 4.0, 1477: 8.0, 1478: 5.0, 1479: 15.0, 1480: 3.0, 1481: 7.0, 1482: 3.0, 1483: 6.0, 1484: 5.0, 1485: 4.0, 1486: 6.0, 1487: 1.0, 1488: 2.0, 1489: 5.0, 1490: 8.0, 1491: 13.0, 1492: 2.0, 1493: 19.0, 1494: 6.0, 1495: 4.0, 1496: 10.0, 1497: 3.0, 1498: 18.0, 1499: 2.0, 1500: 15.0, 1501: 2.0, 1502: 3.0, 1503: 9.0, 1504: 11.0, 1505: 4.0, 1506: 3.0, 1507: 4.0, 1508: 8.0, 1509: 7.0, 1510: 3.0, 1511: 7.0, 1512: 4.0, 1513: 5.0, 1514: 22.0, 1515: 4.0, 1516: 2.0, 1517: 1.0, 1518: 5.0, 1519: 9.0, 1520: 2.0, 1521: 3.0, 1522: 8.0, 1523: 3.0, 1524: 10.0, 1525: 11.0, 1526: 8.0, 1527: 6.0, 1528: 8.0, 1529: 7.0, 1530: 7.0, 1531: 8.0, 1532: 10.0, 1533: 5.0, 1534: 2.0, 1535: 5.0, 1536: 1.0, 1537: 9.0, 1538: 5.0, 1539: 1.0, 1540: 4.0, 1542: 11.0, 1543: 4.0, 1544: 2.0, 1545: 2.0, 1546: 2.0, 1547: 4.0, 1548: 2.0, 1549: 4.0, 1550: 3.0, 1551: 11.0, 1552: 8.0, 1553: 2.0, 1554: 5.0, 1555: 5.0, 1556: 3.0, 1557: 1.0, 1558: 7.0, 1559: 4.0, 1560: 2.0, 1561: 5.0, 1562: 2.0, 1564: 5.0, 1565: 10.0, 1566: 3.0, 1567: 5.0, 1568: 1.0, 1569: 4.0, 1570: 7.0, 1571: 10.0, 1572: 5.0, 1573: 3.0, 1574: 6.0, 1575: 3.0, 1576: 10.0, 1577: 1.0, 1578: 10.0, 1579: 3.0, 1580: 2.0, 1581: 8.0, 1582: 1.0, 1583: 
4.0, 1584: 5.0, 1585: 8.0, 1586: 1.0, 1587: 6.0, 1588: 3.0, 1589: 2.0, 1590: 2.0, 1591: 9.0, 1592: 7.0, 1593: 2.0, 1594: 7.0, 1595: 3.0, 1596: 1.0, 1597: 2.0, 1598: 3.0, 1599: 5.0, 1600: 3.0, 1601: 3.0, 1602: 3.0, 1603: 5.0, 1605: 3.0, 1606: 4.0, 1607: 3.0, 1608: 2.0, 1609: 12.0, 1610: 11.0, 1611: 8.0, 1612: 4.0, 1613: 1.0, 1614: 7.0, 1615: 7.0, 1616: 4.0, 1617: 7.0, 1618: 2.0, 1619: 7.0, 1620: 8.0, 1621: 2.0, 1622: 3.0, 1623: 3.0, 1624: 10.0, 1626: 1.0, 1627: 7.0, 1628: 2.0, 1629: 3.0, 1630: 3.0, 1631: 5.0, 1632: 12.0, 1633: 4.0, 1634: 4.0, 1635: 1.0, 1636: 5.0, 1637: 9.0, 1638: 1.0, 1640: 18.0, 1641: 3.0, 1642: 2.0, 1643: 1.0, 1644: 2.0, 1645: 4.0, 1646: 4.0, 1647: 9.0, 1648: 3.0, 1649: 3.0, 1650: 2.0, 1651: 2.0, 1653: 3.0, 1654: 1.0, 1655: 4.0, 1656: 6.0, 1658: 3.0, 1659: 7.0, 1660: 8.0, 1661: 2.0, 1662: 3.0, 1663: 5.0, 1664: 4.0, 1665: 7.0, 1666: 5.0, 1667: 7.0, 1668: 7.0, 1669: 3.0, 1670: 6.0, 1671: 1.0, 1672: 1.0, 1673: 5.0, 1674: 2.0, 1675: 11.0, 1676: 5.0, 1677: 3.0, 1678: 10.0, 1679: 3.0, 1680: 2.0, 1681: 3.0, 1682: 10.0, 1683: 2.0, 1684: 3.0, 1685: 2.0, 1686: 5.0, 1687: 11.0, 1688: 3.0, 1689: 1.0, 1691: 5.0, 1692: 3.0, 1693: 4.0, 1694: 5.0, 1695: 7.0, 1696: 3.0, 1697: 4.0, 1698: 6.0, 1699: 6.0, 1700: 1.0, 1701: 1.0, 1702: 8.0, 1703: 3.0, 1704: 2.0, 1705: 1.0, 1706: 3.0, 1707: 6.0, 1708: 3.0, 1709: 2.0, 1710: 1.0, 1711: 1.0, 1712: 6.0, 1713: 1.0, 1714: 2.0, 1715: 4.0, 1717: 9.0, 1718: 3.0, 1719: 1.0, 1720: 3.0, 1721: 2.0, 1723: 3.0, 1724: 11.0, 1725: 2.0, 1726: 2.0, 1727: 8.0, 1728: 4.0, 1729: 1.0, 1730: 4.0, 1731: 4.0, 1732: 8.0, 1733: 1.0, 1734: 8.0, 1735: 5.0, 1736: 3.0, 1738: 5.0, 1739: 2.0, 1741: 1.0, 1742: 2.0, 1743: 5.0, 1744: 3.0, 1746: 2.0, 1747: 2.0, 1748: 8.0, 1749: 1.0, 1750: 3.0, 1751: 2.0, 1752: 2.0, 1753: 7.0, 1754: 3.0, 1755: 3.0, 1756: 5.0, 1757: 1.0, 1758: 1.0, 1759: 5.0, 1760: 10.0, 1761: 2.0, 1762: 6.0, 1763: 3.0, 1764: 2.0, 1765: 1.0, 1766: 7.0, 1768: 2.0, 1769: 1.0, 1770: 6.0, 1771: 1.0, 1772: 2.0, 1773: 4.0, 1774: 5.0, 1775: 1.0, 1776: 2.0, 1777: 3.0, 1778: 3.0, 1779: 2.0, 1780: 4.0, 1781: 3.0, 1782: 3.0, 1783: 4.0, 1784: 10.0, 1785: 2.0, 1786: 5.0, 1787: 2.0, 1788: 3.0, 1789: 5.0, 1790: 4.0, 1791: 7.0, 1792: 6.0, 1793: 2.0, 1794: 1.0, 1795: 4.0, 1796: 5.0, 1797: 2.0, 1798: 4.0, 1799: 5.0, 1800: 3.0, 1801: 4.0, 1802: 2.0, 1803: 3.0, 1804: 2.0, 1805: 3.0, 1806: 1.0, 1808: 4.0, 1809: 4.0, 1810: 3.0, 1811: 5.0, 1812: 11.0, 1813: 3.0, 1814: 10.0, 1815: 4.0, 1816: 5.0, 1817: 4.0, 1818: 2.0, 1820: 2.0, 1821: 2.0, 1822: 2.0, 1823: 3.0, 1824: 1.0, 1825: 3.0, 1826: 1.0, 1827: 4.0, 1828: 3.0, 1829: 2.0, 1830: 3.0, 1831: 4.0, 1832: 1.0, 1833: 3.0, 1834: 5.0, 1835: 2.0, 1836: 2.0, 1837: 2.0, 1838: 1.0, 1839: 2.0, 1840: 1.0, 1841: 4.0, 1842: 4.0, 1844: 1.0, 1845: 3.0, 1846: 1.0, 1847: 2.0, 1848: 5.0, 1849: 5.0, 1850: 1.0, 1851: 3.0, 1852: 4.0, 1853: 5.0, 1854: 3.0, 1855: 2.0, 1856: 4.0, 1858: 2.0, 1859: 3.0, 1862: 1.0, 1863: 5.0, 1864: 7.0, 1865: 4.0, 1866: 2.0, 1867: 2.0, 1868: 2.0, 1869: 4.0, 1870: 1.0, 1871: 4.0, 1872: 1.0, 1873: 4.0, 1874: 1.0, 1876: 1.0, 1877: 1.0, 1878: 3.0, 1879: 2.0, 1880: 1.0, 1881: 1.0, 1882: 4.0, 1883: 5.0, 1884: 3.0, 1885: 2.0, 1886: 5.0, 1887: 3.0, 1888: 7.0, 1889: 2.0, 1890: 1.0, 1891: 1.0, 1892: 2.0, 1893: 3.0, 1895: 1.0, 1896: 1.0, 1897: 1.0, 1898: 1.0, 1900: 7.0, 1901: 3.0, 1902: 1.0, 1903: 1.0, 1904: 4.0, 1905: 4.0, 1907: 1.0, 1908: 2.0, 1909: 1.0, 1910: 1.0, 1911: 5.0, 1912: 2.0, 1913: 3.0, 1914: 3.0, 1915: 1.0, 1916: 4.0, 1917: 2.0, 1918: 2.0, 1920: 1.0, 1921: 2.0, 1922: 3.0, 1923: 6.0, 1924: 1.0, 1925: 5.0, 1926: 3.0, 1927: 3.0, 
1928: 3.0, 1929: 1.0, 1930: 4.0, 1931: 4.0, 1933: 4.0, 1935: 2.0, 1936: 2.0, 1937: 4.0, 1938: 2.0, 1939: 1.0, 1940: 1.0, 1941: 3.0, 1942: 2.0, 1943: 5.0, 1944: 3.0, 1945: 1.0, 1946: 4.0, 1948: 3.0, 1953: 1.0, 1954: 2.0, 1955: 1.0, 1956: 1.0, 1957: 2.0, 1959: 3.0, 1960: 3.0, 1961: 2.0, 1962: 1.0, 1963: 4.0, 1964: 3.0, 1965: 2.0, 1966: 3.0, 1967: 4.0, 1968: 1.0, 1969: 3.0, 1970: 1.0, 1971: 6.0, 1972: 1.0, 1973: 1.0, 1974: 3.0, 1975: 2.0, 1976: 1.0, 1977: 3.0, 1980: 1.0, 1981: 2.0, 1983: 1.0, 1984: 4.0, 1986: 1.0, 1987: 4.0, 1988: 4.0, 1989: 2.0, 1990: 1.0, 1991: 6.0, 1993: 3.0, 1994: 6.0, 1999: 9.0, 2001: 1.0, 2002: 3.0, 2003: 2.0, 2004: 1.0, 2005: 4.0, 2006: 2.0, 2007: 6.0, 2008: 6.0, 2009: 3.0, 2010: 1.0, 2011: 3.0, 2013: 1.0, 2014: 5.0, 2018: 2.0, 2019: 2.0, 2020: 6.0, 2021: 4.0, 2022: 1.0, 2026: 4.0, 2027: 1.0, 2028: 4.0, 2029: 2.0, 2030: 3.0, 2031: 3.0, 2035: 4.0, 2037: 1.0, 2040: 8.0, 2041: 1.0, 2043: 2.0, 2044: 1.0, 2045: 1.0, 2046: 2.0, 2049: 3.0, 2050: 9.0, 2051: 1.0, 2052: 2.0, 2054: 2.0, 2055: 2.0, 2056: 4.0, 2058: 2.0, 2059: 1.0, 2060: 4.0, 2061: 1.0, 2062: 1.0, 2063: 2.0, 2064: 2.0, 2065: 4.0, 2066: 3.0, 2067: 7.0, 2069: 2.0, 2070: 1.0, 2071: 5.0, 2073: 4.0, 2074: 6.0, 2075: 3.0, 2076: 1.0, 2077: 3.0, 2079: 4.0, 2080: 1.0, 2081: 1.0, 2082: 2.0, 2083: 3.0, 2084: 3.0, 2087: 1.0, 2088: 6.0, 2089: 1.0, 2090: 4.0, 2091: 4.0, 2092: 1.0, 2093: 4.0, 2095: 1.0, 2096: 3.0, 2097: 1.0, 2098: 2.0, 2099: 1.0, 2100: 2.0, 2102: 4.0, 2104: 1.0, 2105: 2.0, 2106: 1.0, 2107: 1.0, 2108: 1.0, 2109: 1.0, 2110: 4.0, 2111: 2.0, 2112: 2.0, 2113: 5.0, 2114: 1.0, 2116: 4.0, 2117: 1.0, 2118: 1.0, 2119: 3.0, 2120: 1.0, 2121: 2.0, 2122: 4.0, 2126: 2.0, 2129: 2.0, 2131: 1.0, 2132: 1.0, 2133: 2.0, 2134: 3.0, 2135: 1.0, 2136: 2.0, 2137: 1.0, 2138: 5.0, 2139: 1.0, 2140: 3.0, 2141: 8.0}))" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ngram_model.transform(lang_df).select('features').first()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Building the classifier\n", "\n", "We have successfully transformed the dataset into a representation that we can (almost) feed into a classifier. What we need still is a label column as well the final stage of the pipeline that will fit the actual model. \n", "\n", "To generate labels from the language column, we will use the `StringIndexer` as a part of our pipeline. For the classification we will use the simplest possible `LogisticRegression` -- once you've convinced yourself that you know how it works, go ahead and experiment with other [classifiers](http://spark.apache.org/docs/latest/api/python/pyspark.ml#module-pyspark.ml.classification)." ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from pyspark.ml.classification import LogisticRegression\n", "from pyspark.ml.feature import StringIndexer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**TODO:** Set up a `classification_pipeline`. Use the N-gram model we defined above as a starting stage, followed by a `StringIndexer` and a `LogisticRegression` classifier. Make sure you read the documentation on these!\n", "\n", "Note that we can use the pre-trained N-gram model -- the `Pipeline` will automatically infer that the stage is already complete and will only use it in the transformation step. 
" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "scrolled": true }, "outputs": [], "source": [ "classification_pipeline = Pipeline(\n", " stages=[ngram_model, \n", " StringIndexer(inputCol='language', outputCol='label'),\n", " LogisticRegression(regParam=0.002, elasticNetParam=1, maxIter=10)\n", " ]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the classifier! The fitting will take a while -- you may want to run this first on a subset of the data" ] }, { "cell_type": "code", "execution_count": 50, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Split the training and test sets\n", "training, test = lang_df.sample(True, 0.2).randomSplit([0.8,0.2])" ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "scrolled": true }, "outputs": [], "source": [ "classifier = classification_pipeline.fit(training)" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Predictions for en\n", "+-----+---------------------------------------------------------------+----------+\n", "|label|probability |prediction|\n", "+-----+---------------------------------------------------------------+----------+\n", "|0.0 |[0.9988005907780809,7.88610734359324E-4,4.1079848755980176E-4] |0.0 |\n", "|0.0 |[0.998917107716628,7.33096968988312E-4,3.4979531438355903E-4] |0.0 |\n", "|0.0 |[0.9989542530960642,6.742910478926779E-4,3.7145585604316547E-4]|0.0 |\n", "|0.0 |[0.9979812116689254,0.0014214961543689822,5.972921767057092E-4]|0.0 |\n", "|0.0 |[0.9976958073687584,0.00162556839444595,6.786242367955829E-4] |0.0 |\n", "|0.0 |[0.99886258717009,7.499241334248441E-4,3.874886964851842E-4] |0.0 |\n", "|0.0 |[0.9990570458254298,6.331701031364094E-4,3.097840714338575E-4] |0.0 |\n", "|0.0 |[0.9983657577006078,0.001131550441907882,5.026918574843947E-4] |0.0 |\n", "|0.0 |[0.9990957214984193,5.871605212797993E-4,3.171179803008901E-4] |0.0 |\n", "|0.0 |[0.9988794707082886,7.60572521225022E-4,3.599567704864347E-4] |0.0 |\n", "+-----+---------------------------------------------------------------+----------+\n", "only showing top 10 rows\n", "\n", "Predictions for fr\n", "+-----+---------------------------------------------------------------+----------+\n", "|label|probability |prediction|\n", "+-----+---------------------------------------------------------------+----------+\n", "|1.0 |[0.008381708795528764,0.9903705265768796,0.0012477646275914497]|1.0 |\n", "|1.0 |[0.0072177041215181915,0.991513236861732,0.0012690590167496693]|1.0 |\n", "|1.0 |[0.011648740444191029,0.9868082815571253,0.0015429779986834936]|1.0 |\n", "|1.0 |[0.007994017426962919,0.9905439591790249,0.0014620233940122497]|1.0 |\n", "|1.0 |[0.013962646252511346,0.9840939277409182,0.0019434260065705788]|1.0 |\n", "|1.0 |[0.0045897370501745745,0.9945341549407803,8.761080090450625E-4]|1.0 |\n", "|1.0 |[0.01352432004674269,0.9846904574008984,0.0017852225523587082] |1.0 |\n", "|1.0 |[0.009157581140411543,0.9895636363993151,0.0012787824602733784]|1.0 |\n", "|1.0 |[0.005768877194304736,0.993079725839248,0.0011513969664472647] |1.0 |\n", "|1.0 |[0.004522374406022942,0.9946308643338839,8.46761260093219E-4] |1.0 |\n", "+-----+---------------------------------------------------------------+----------+\n", "only showing top 10 rows\n", "\n", "Predictions for de\n", "+-----+---------------------------------------------------------------+----------+\n", "|label|probability |prediction|\n", 
"+-----+---------------------------------------------------------------+----------+\n", "|2.0 |[0.007115480140553636,0.0028805819781134176,0.9900039378813329]|2.0 |\n", "|2.0 |[0.015104963762786888,0.004766436885516083,0.9801285993516969] |2.0 |\n", "|2.0 |[0.004030386755147294,0.0021455501616592775,0.9938240630831934]|2.0 |\n", "|2.0 |[0.005008596363494801,0.002405650939289787,0.9925857526972155] |2.0 |\n", "|2.0 |[0.0201085123501476,0.008925132235816025,0.9709663554140364] |2.0 |\n", "|2.0 |[0.01128220958798443,0.004317611704032041,0.9844001787079836] |2.0 |\n", "|2.0 |[0.004481288640557262,0.002486584391140174,0.9930321269683026] |2.0 |\n", "|2.0 |[0.01444117656076587,0.006115299276470507,0.9794435241627637] |2.0 |\n", "|2.0 |[0.023183212010339314,0.00753809057249261,0.9692786974171681] |2.0 |\n", "|2.0 |[0.003909491485698821,0.0032784056473365844,0.9928121028669645]|2.0 |\n", "+-----+---------------------------------------------------------------+----------+\n", "only showing top 10 rows\n", "\n" ] } ], "source": [ "# check the predictions \n", "for lang in ['en', 'fr', 'de']:\n", " print('Predictions for {0}'.format(lang))\n", " (classifier.transform(\n", " test.filter(test.language == lang))\n", " .select('label', 'probability', 'prediction')\n", " .show(10, truncate=False))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should be seeing mostly good agreement between `label` and `prediction`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Improving the model and continuing the exploration of the data\n", "\n", "We have completed the basic model training, but many improvements are possible. One obvious improvement is hyperparameter tuning -- check out the [docs](http://spark.apache.org/docs/latest/ml-tuning.html#ml-tuning-model-selection-and-hyperparameter-tuning) for some examples and try it out!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some other ideas for things you could do with this dataset: \n", "\n", "* try other [classifiers that are included in MLlib](http://spark.apache.org/docs/latest/mllib-classification-regression.html)\n", "* build a regression model to predict year of publication (may be better with word ngrams)\n", "* do clustering on the english books and see if sub-groups of the language pop up\n", "* cluster by author -- do certain authors write in similar ways?" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "scrolled": true }, "outputs": [], "source": [ "spark.stop()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.1" } }, "nbformat": 4, "nbformat_minor": 2 }