
Official Editorial (C++, Python)

Explanation

The official editorial covers a linear-time approach, but a quadratic solution is sufficient for full credit.

This problem is certainly a doozy, though. The way to think about it is to pin down what you want to hold constant, then see how everything else can be expressed in terms of what you've already fixed.

In this case, the quantities we'll brute force over are the numbers of transitive and intransitive verbs we use, which I'll call TVs and ITVs for brevity. The intuitive reason for this is that each sentence, not counting conjunctions, has exactly one verb. Thus, once we know how many verbs we're going to use, we can determine how many nouns, commas, etc. we're going to use as well.

A TV sentence needs two nouns and an ITV sentence needs one, so we first have to check that we have enough nouns. If we use a nonzero number of TVs, we can also spend commas to append leftover nouns to the end of a TV sentence, one comma per extra noun.

We also try to use as many conjunctions as possible: if there are $s$ sentences, we can use at most $\left\lfloor\frac{s}{2}\right\rfloor$ conjunctions, since each sentence can be joined at most once. Because two joined sentences form a compound sentence ending in a single period, each conjunction we use also saves a period; the number of periods needed is the number of sentences minus the number of conjunctions used, which must not exceed the period limit.
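
Putting this together, here is a minimal, self-contained Python sketch of the counting. The function name count_words and the loop structure are hypothetical; the actual implementations below are organized differently.

def count_words(nouns, tvs, itvs, conjs, commas, periods):
    """Return the maximum number of words for one test case (sketch only)."""
    best = 0
    for itv_used in range(itvs + 1):  # intransitive-verb sentences
        for tv_used in range(tvs + 1):  # transitive-verb sentences
            sentences = itv_used + tv_used
            # ITV sentences need 1 noun each, TV sentences need 2 each.
            if itv_used + 2 * tv_used > nouns:
                break
            # Each conjunction merges two sentences, which then share a period.
            conj_used = min(conjs, sentences // 2)
            if sentences - conj_used > periods:
                continue
            # Leftover nouns can be appended to a TV sentence, one comma each.
            extra = min(commas, nouns - (itv_used + 2 * tv_used)) if tv_used > 0 else 0
            words = 2 * itv_used + 3 * tv_used + extra + conj_used
            best = max(best, words)
    return best

# Example: 8 nouns, 2 TVs, 2 ITVs, 1 conjunction, 3 commas, 2 periods -> 12
print(count_words(8, 2, 2, 1, 3, 2))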

The rest is formatting; no algorithmic tricks are required.
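
For reference, here is one possible way to lay the words out, as a hedged sketch with made-up words and hypothetical helper names; it is not the exact output logic of the implementations below.

def build_itv_sentence(subject, verb):
    # "noun intransitive-verb"
    return f"{subject} {verb}"

def build_tv_sentence(subject, verb, objects):
    # "noun transitive-verb noun, noun, ..." -- commas separate the extra nouns
    return f"{subject} {verb} " + ", ".join(objects)

def join_with_periods(sentences, conjunctions):
    # Pair up sentences with conjunctions where possible; every resulting
    # (compound) sentence ends with a period.
    out, i = [], 0
    while i < len(sentences):
        if conjunctions and i + 1 < len(sentences):
            out.append(f"{sentences[i]} {conjunctions.pop()} {sentences[i + 1]}.")
            i += 2
        else:
            out.append(f"{sentences[i]}.")
            i += 1
    return " ".join(out)

parts = [build_tv_sentence("bessie", "taught", ["elsie", "mildred"]),
         build_itv_sentence("farmer", "mooed")]
print(join_with_periods(parts, ["and"]))  # bessie taught elsie, mildred and farmer mooed.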

Implementation

Time Complexity: $\mathcal{O}(N^2)$

C++

#include <algorithm>
#include <iostream>
#include <map>
#include <vector>

using namespace std;

int main() {
    int test_num;  // number of independent test cases
    cin >> test_num;

Java

import java.io.*;
import java.util.*;
public class MooLanguage {
    public static void main(String[] args) throws IOException {
        BufferedReader read = new BufferedReader(new InputStreamReader(System.in));
        // number of independent test cases
        int testNum = Integer.parseInt(read.readLine());
        for (int t = 0; t < testNum; t++) {
            StringTokenizer initial = new StringTokenizer(read.readLine());
            int wordNum = Integer.parseInt(initial.nextToken());  // number of available words

Python

for _ in range(int(input())):
    word_num, comma_num, period_num = [int(i) for i in input().split()]
    # group the available words by part of speech
    words = {
        "noun": [],
        "transitive-verb": [],
        "intransitive-verb": [],
        "conjunction": [],
    }
    for _ in range(word_num):
        token, type_ = input().split()
