



Aktuality

spaCy training loss not decreasing

This blog explains what spaCy is and how to do named entity recognition (NER) with it. spaCy is an open-source library for NLP, widely used because of its flexible and advanced features. In order to train spaCy's models with the best data available, I tokenize English according to the Penn Treebank scheme. It's not perfect, but it's what everybody is using, and it's good enough. Before this, I didn't use any annotation tool for annotating the entities in the text.

A recurring question when training: at the start of training the loss was about 2.9, but after 15 hours of training it was still only down to about 2.2. Why does this happen, and how do I train the model properly? First, some oscillation is expected, not only because the batches differ but because the optimization is stochastic. You can see that in the training loss; however, this is not the case for the validation data you have. A related symptom is that training loss is decreasing but validation loss is not, as in this log:

Epoch 200/200
84/84 - 0s - loss: 0.5269 - accuracy: 0.8690 - val_loss: 0.4781 - val_accuracy: 0.8929

Another possible reason is that the model is not trained long enough, or that the early-stopping criterion is too strict. The EarlyStopping callback will stop training once triggered, but the model at the end of training may not be the model with the best performance on the validation dataset; the ModelCheckpoint callback saves the best model observed so far. Finally, plot the loss vs. epochs graph on both the training and validation sets.
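The EarlyStopping/ModelCheckpoint logic above is framework-agnostic. Here is a minimal plain-Python sketch of it, assuming you already collect one validation loss per epoch; the function name and defaults are illustrative, not any library's API:

```python
def early_stopping(val_losses, patience=3, min_delta=0.0):
    """Return (best_epoch, stop_epoch): the epoch whose model a checkpoint
    callback would keep, and the epoch at which training would stop."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss - min_delta:          # improvement: checkpoint here
            best_epoch, best_loss, waited = epoch, loss, 0
        else:                                     # no improvement: spend patience
            waited += 1
            if waited >= patience:
                return best_epoch, epoch          # stop; restore best checkpoint
    return best_epoch, len(val_losses) - 1        # patience never ran out

# losses improve, then plateau: stop at epoch 5, but keep the epoch-2 model
best, stop = early_stopping([0.9, 0.7, 0.6, 0.62, 0.61, 0.63, 0.64], patience=3)
print(best, stop)  # 2 5
```

This also shows why the model at the end of training is not necessarily the best one: the last `patience` epochs were spent waiting, not improving.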
I used the spacy-ner-annotator to build the dataset and train the model as suggested in the article. spaCy is a library for advanced Natural Language Processing in Python and Cython, and Prodigy's train recipe is a wrapper around spaCy's training API, optimized for training straight from Prodigy datasets and for quick experiments. It reads from a dataset, holds back data for evaluation, and outputs nicely-formatted results, and it is simple to feed in new instances and update the model; the result could be better if we trained the spaCy models more. One caveat we hit: many entities tagged by spaCy were not valid organization names at all. This wasn't really a problem of spaCy itself, since at first sight all the extracted entities did look like organization names.

With the spaCy Matcher you can find words and phrases in the text using user-defined rules; it is like regular expressions on steroids.

Back to the training question. As I run my training, I see the training loss going down until the point where I correctly classify over 90% of the samples in my training batches. However, a couple of epochs later I notice that the training loss increases and my accuracy drops. This seems weird to me, as I would expect performance on the training set to improve with time, not deteriorate. As you highlight, the second issue is that there is a plateau. An additional callback is required that will save the best model observed during training for later use. Here's a viz of the losses over ten epochs of training. Note that it is not uncommon, when training an RNN, that reducing model complexity (hidden size, number of layers, or word-embedding dimension) does not improve overfitting.
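Because per-batch loss oscillates, it helps to smooth the curve before reading a trend into a loss viz. A small sketch, not any plotting library's API:

```python
def moving_average(losses, window=5):
    """Smooth a noisy per-batch loss curve with a trailing moving average."""
    out = []
    for i in range(len(losses)):
        chunk = losses[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(moving_average([1, 2, 3, 4], window=2))  # [1.0, 1.5, 2.5, 3.5]
```

Plotting the smoothed series alongside the raw one makes it much easier to tell a plateau from ordinary minibatch noise.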
Now I have to train on my own data to identify new entities in the text (see also the spaCy text-categorisation multi-label example and its issues). spaCy is built on the very latest research, and was designed from day one to be used in real products.

What are the possible reasons why a model's loss is not decreasing fast? Keep in mind that the reported training-iteration loss is over the minibatches, not the whole training set. Loss and accuracy can also move independently: training on the CIFAR dataset, I noticed that eventually the training and validation accuracies stay constant while the loss still decreases. Therefore I would definitely look into how you are computing the validation loss and accuracy.

In one experiment, on the DCASE 2016 challenge acoustic scene classification problem with a CNN, I used an MSE loss function and SGD optimization, and the training loss would not decrease below a certain value. The snippet below is reconstructed from a truncated fragment; the Keras import path and padding='same' are assumptions, since the original cuts off mid-argument, and `data` is the input array prepared earlier:

```python
from tensorflow.keras.layers import Input, Conv3D

xtrain = data.reshape(21168, 21, 21, 21, 1)
inp = Input(shape=(21, 21, 21, 1))
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',
           padding='same')(inp)  # padding value assumed; original is truncated
```
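The "accuracy flat while loss falls" observation has a simple explanation: cross-entropy rewards confidence, so the predicted probability of the already-correct class can keep rising without the argmax (and hence the accuracy) changing. A toy illustration:

```python
import math

def nll(p_correct):
    """Negative log-likelihood of the true class."""
    return -math.log(p_correct)

# the model grows more confident in the same (correct) prediction
probs = [0.55, 0.7, 0.9]
losses = [nll(p) for p in probs]
# accuracy is identical in all three cases, yet the loss keeps falling
print(losses[0] > losses[1] > losses[2])  # True
```

The converse (loss falling while validation accuracy is stuck) is often the same effect on the training set only, which is worth checking before blaming the optimizer.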
Label the data and train the model. One can also use one's own examples to train and modify spaCy's in-built NER model, and support is provided for fine-tuning transformer models via spaCy's standard nlp.update training API. We will create a spaCy NLP pipeline and use the new model to detect oil entities never seen before. spaCy NER already supports entity types such as PERSON (people, including fictional) and NORP (nationalities or religious or political groups). While regular expressions use text patterns to find words and phrases, the spaCy Matcher uses not only text patterns but also lexical properties of the word, such as POS tags, dependency tags, and lemmas. For example, loading pickled training data (the original snippet is truncated, so the sketch stops where it cuts off):

```python
import pickle

def train_spacy(training_pickle_file):
    # read pickle file to load training data
    with open(training_pickle_file, 'rb') as input:
        TRAIN_DATA = pickle.load(input)
    # ... the rest of the original snippet is truncated here
```

On the debugging side: if your loss is steadily decreasing, let it train some more. If the training loss is not decreasing below a specific value, the key point to consider is that your loss for both validation and train is more than 1; generally speaking, that's a much bigger problem than having an accuracy of 0.37 (which is of course also a problem, as it implies a model that does worse than a simple coin toss). Some frameworks have layers like batch norm and dropout that behave differently during training and testing, so switch from train to test mode; switching to the appropriate mode might help your network predict properly. Monitor the activations, weights, and updates of each layer. Cyclical learning rates were originally proposed in Smith 2017, but, as with all things, there's a Medium article for that.
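The train/test-mode point is easiest to see with inverted dropout: during training, activations are randomly zeroed and the survivors rescaled by 1/(1-p), while in evaluation mode the layer is the identity. A toy sketch, not spaCy's or any framework's actual implementation:

```python
import random

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout over a list of activations."""
    if not training:
        return list(x)                 # eval mode: pass activations through unchanged
    rng = rng or random.Random(0)      # fixed seed here only for reproducibility
    return [0.0 if rng.random() < p else v / (1 - p) for v in x]

# forgetting to switch modes means predicting with noisy activations
print(dropout([1.0, 2.0, 3.0], training=False))  # [1.0, 2.0, 3.0]
```

A network left in training mode at inference time keeps injecting this noise, which is one common reason "the model predicts badly even though the loss looked fine".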
Before diving into how NER is implemented in spaCy, let's quickly understand what a named entity recognizer is. spaCy comes with pretrained pipelines and currently supports tokenization and training for 60+ languages. Beyond PERSON and NORP, the entity types include FAC (buildings, airports, highways, bridges), ORG (companies, agencies, institutions), and GPE (countries, cities, states). We will use a spaCy neural-network model to train a new statistical model, in a two-step process. The main reason for making an annotation tool is to reduce the annotation time. On tokenization: the Penn Treebank was distributed with a script called tokenizer.sed, which tokenizes ASCII newswire text roughly according to the Penn Treebank standard. You can learn more about compounding batch sizes in spaCy's training tips. (In the DCASE experiment above, all training data, .wav audio files, were converted into 1024x1024 JPEGs of MFCC output.)

What to do if training loss decreases but validation loss does not decrease? And what does it mean when the loss keeps decreasing while the training and validation accuracies stay approximately constant? I am getting a training loss of ~0.2000 every time; I found many questions on this, but none solved my problem. I evaluated training loss and accuracy, plus precision, recall, and F1 scores on the test set, for each of the five training iterations; it is preferable to create a small function for plotting such metrics. Based on the loss graphs, validation loss is typically higher than training loss when the model is not trained long enough.
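For intuition, the compounding batch-size schedule starts small and grows geometrically up to a cap. The real helper is spacy.util.compounding; this plain-Python sketch mirrors its documented behavior:

```python
def compounding(start, stop, compound):
    """Yield start, start*compound, start*compound**2, ... capped at stop."""
    curr = float(start)
    while True:
        yield min(curr, stop)
        curr *= compound

sizes = compounding(1.0, 4.0, 2.0)
print([next(sizes) for _ in range(4)])  # [1.0, 2.0, 4.0, 4.0]
```

spaCy's v2 training examples typically pass something like compounding(4.0, 32.0, 1.001), so the batch size creeps up slowly over thousands of updates rather than doubling as in this toy example.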
Here's an implementation of the training loop described above. The original listing is truncated, so the code stops where it cuts off; the en_core_web_sm pipeline is an assumption:

```python
import os
import random
import spacy
from spacy.util import minibatch, compounding

def train_model(
    training_data: list,
    test_data: list,
    iterations: int = 20,
) -> None:
    # Build pipeline
    nlp = spacy.load("en_core_web_sm")  # assumed; the original cuts off at `spacy.`
    # ... the rest of the original listing is truncated
```

spaCy is industrial-strength NLP, and spacy.load can be used to load a saved model. Based on my curves, I think the model is improving and I'm simply not calculating validation loss correctly. One explanation: the training loss is higher because you've made it artificially harder for the network to give the right answers (regularization such as dropout is active during training but not during evaluation). The loss over the whole validation set is also only computed once in a while, not per batch. When looking for an answer to this problem, I found a similar question whose answer suggested a sanity check: for half of the questions, label a wrong answer as correct and see whether the model can still learn.

I have created a tool called spaCy NER Annotator, and I have around 18 texts with 40 annotated new entities. Next steps: predict on new texts the model has not seen, train NER from a blank spaCy model, and train a completely new entity type; finally, we will use pattern matching instead of a deep-learning model and compare both methods. There are several ways to do this. In one run, the starting training loss was 0.016 with validation loss 0.0019, and the final training loss was 0.004 with validation loss 0.0007.
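The minibatch helper used in the loop pairs with such a size schedule: it slices the shuffled examples into batches whose sizes are drawn from a generator. A rough sketch of that behavior (spacy.util.minibatch is the real implementation):

```python
def minibatch(items, size):
    """Split items into batches; each batch size is drawn from `size`,
    which may be an iterator such as a compounding schedule."""
    items = list(items)
    batches, i = [], 0
    while i < len(items):
        n = int(next(size))            # assumes the size iterator never runs dry
        batches.append(items[i:i + n])
        i += n
    return batches

print(minibatch(range(5), iter([2, 2, 3])))  # [[0, 1], [2, 3], [4]]
```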




Today Hungary and Poland are the targets of punishment; tomorrow it may be our turn


"Only an independent judicial body can determine what the rule of law is, not a political majority," wrote Slovenian Prime Minister Janša in a Tuesday letter to European Council President Charles Michel. He thereby backed Poland and Hungary, and so a third veto appeared. Germany and the representatives of the European Parliament changed the budget-protection mechanism, and together with the representatives of the governments that support tying the disbursement of EU funds to respect for the rule of law, they believe they will get Poland and Hungary to change their minds in the coming weeks. The Poles and Hungarians, by contrast, believe that under pressure from the countries hit hardest by Covid-19, it is the Germans and the European Parliament's representatives who will change their minds.

The veto mechanism is routine in the Union. At the same summit where the Polish and Hungarian vetoes fell, Bulgaria vetoed membership talks with North Macedonia. But that kind of veto is met with a shrug of the shoulders, even though in principle it is the same as the Polish and Hungarian ones.

Under the Treaty on European Union, a decision to punish a member state over the rule of law is taken unanimously by the European Council, not by any majority of the Council of Ministers or by Parliament (on a proposal from one third of the member states or from the European Commission, and after obtaining the consent of the European Parliament, the European Council may unanimously determine that a member state is in serious and persistent breach of the stated values). Poland and Hungary argue that introducing a new condition would require a change to the EU treaties. When Jaroslaw Kaczyński proposed treaty changes to Angela Merkel in 2017 (with a view to reforming the EU), she flatly refused at the thought of what it would mean in practice. She has not officially met Jaroslaw Kaczyński since. A year has come and gone, and Angela Merkel's position remains the same: do not touch the treaties, but bend them a little, in the style of adventurers of virtue, for the purpose of punishing the disobedient. Today Hungary and Poland are the targets of punishment; tomorrow it may be our turn, perhaps simply for not accepting a sufficient number of refugees.

The Czech and Slovak foreign ministers consider respect for the rule of law essential and agree with Angela Merkel. They probably realize what Poland and Hungary are after, but they do not want to antagonize the Union's heavyweights. Our prime minister's position is, to put it mildly, constrained by his business troubles, and knowing the firm views of Morawiecki and Orbán, he would rather not wade into the heated dispute, even as a possible mediator of a compromise. In all likelihood he will not back the V4 members on this issue in the European Council, but he should at least tell them so and explain why. So that they simply know, man to man, where they stand, and do not take his position the way we took it when, back in the day, the then Polish interior minister Teresa Piotrowska surprisingly set about redistributing refugees.

Czech politicians, too, should be able to understand Polish politics and Polish priorities. Czech interests do not overlap with Polish ones everywhere, but our relations are developing very well and will, let us hope, keep developing without being managed by German or Dutch politicians who cannot stomach the V4. A quarrelling V4 is, after all, exactly what would suit Angela Merkel best.


Morawiecki: Cemeteries will be closed for All Souls' Day


On Saturday, Sunday, and Monday, cemeteries in Poland will be closed, the Polish government has decided. We do not want people gathering at cemeteries and on public transport, Prime Minister Mateusz Morawiecki said.

"We waited with this decision because we lived in hope that the number of infections would fall at least slightly. Today, however, it is again higher than yesterday, yesterday it was higher than the day before, and we do not want to increase the risk of people gathering at cemeteries, on public transport, and in front of cemeteries," Morawiecki explained.

He added that for him this is "a great sadness," because he too wanted to visit the graves of his father and his sister. The feast of the dead is deeply rooted in Polish tradition, but because it carries an enormous risk, Morawiecki decided that life is more important than tradition.


Opposition MPs confront the PiS chairman


The Sejm's protective service had to cordon off the bench where Jaroslaw Kaczyński sits from the protesting MPs.

"I am sorry to have to say this, but in the chamber, among the members of the Left and the Civic Platform, there are deputies wearing masks with symbols reminiscent of the insignia of the Hitlerjugend and the SS. I understand, however, that the total opposition is drawing on totalitarian models," Deputy Speaker of the Sejm Ryszard Terlecki said at the opening of the sitting.

Green activist and deputy chair of the Civic Coalition parliamentary club Małgorzata Tracz, who was wearing a mask with the symbol of the protests against the Constitutional Tribunal's ruling, a red lightning bolt, replied: "Mr Deputy Speaker, High Chamber, history is unfolding before our eyes. For six days, thousands of young people have been protesting in the streets of Polish cities, protesting in defence of their dignity, in defence of their freedom, in defence of the right to choose, for the right to abortion. This is a war, and you will lose this war. And who is responsible for this war? Minister Kaczyński, that is your responsibility."


Latest posts

  • Today Hungary and Poland are the targets of punishment; tomorrow it may be our turn (19.11.2020, Jaromír Piskoř)
  • Morawiecki: Cemeteries will be closed for All Souls' Day (30.10.2020, Jaromír Piskoř)
  • Opposition MPs confront the PiS chairman (27.10.2020, Jaromír Piskoř)
