We can sort the tasks by start time and then attempt to complete them in that order.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
	cin.tie(0)->sync_with_stdio(0);
	int T;
	cin >> T;
	for (int tc = 0; tc < T; ++tc) {
		int N;
		cin >> N;
		vector<pair<ll, ll>> tasks;
		for (int n = 0; n < N; ++n) {
			ll s, t;
			cin >> s >> t;
			tasks.push_back({s, t});
		}
		sort(begin(tasks), end(tasks));  // increasing order of start time
		ll time_so_far = 0;
		int ans = 0;
		for (auto [s, t] : tasks) {
			if (time_so_far <= s) {  // job can still be started on time
				++ans;
				time_so_far += t;
			}
		}
		cout << ans << "\n";
	}
}
Here we no longer assume all ti are equal. Let's first consider the problem of checking whether the answer is N or not; that is, whether all the jobs can be completed.
We need to decide what order to complete the jobs in. One possibility is to complete the jobs in increasing order of si, but this turns out to be incorrect when the ti are not all equal: with jobs (s1,t1)=(1,10) and (s2,t2)=(2,1), starting job 1 first means job 2 cannot begin until time 11>2, whereas starting job 2 at time 0 and job 1 at time 1≤s1 completes both. Instead, it turns out that if all the jobs can be completed, they can be completed in increasing order of si+ti.
Claim: Suppose we complete job a and then job b immediately afterward where sa+ta≥sb+tb. Then we can swap the order of jobs a and b while still satisfying both of their start times.
Proof: Suppose job a is started at time T; then job b is started at time T+ta, so T+ta≤sb, i.e., T≤sb−ta. If we swap the order of jobs a and b, then we need T≤sb and T≤sa−tb. Both hold: T≤sb−ta≤sb since ta≥0, and T≤sb−ta≤sa−tb since sb+tb≤sa+ta by assumption, done.
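For instance, with (sa,ta)=(5,3) and (sb,tb)=(4,2) we have sa+ta=8≥6=sb+tb. Completing a then b requires T≤min(sa,sb−ta)=1, while completing b then a only requires T≤min(sb,sa−tb)=3, a strictly weaker condition.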
Define da=sa+ta and suppose we relabel the jobs such that d1≤d2≤⋯≤dN. Suppose there exists some order allowing all jobs to be completed. Then we can repeatedly apply the claim to transform this order into 1…N. Completing the jobs in the order 1…N starts job i at time t1+⋯+ti−1, which is allowed exactly when t1+⋯+ti−1≤si, i.e., t1+⋯+ti≤di. Thus, all jobs can be completed if and only if t1+t2+⋯+ti≤di for all i.
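As a sanity check, this criterion is easy to test directly: sort by d=s+t, accumulate a running sum of the t values, and compare it against each d. Below is a minimal sketch; it reads the same input format as the solutions in this section, and the YES/NO output is only illustrative.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
	cin.tie(0)->sync_with_stdio(0);
	int T;
	cin >> T;
	for (int tc = 0; tc < T; ++tc) {
		int N;
		cin >> N;
		vector<pair<ll, ll>> tasks;  // {d = s + t, t}
		for (int n = 0; n < N; ++n) {
			ll s, t;
			cin >> s >> t;
			tasks.push_back({s + t, t});
		}
		sort(begin(tasks), end(tasks));
		ll sum = 0;  // t1 + ... + ti after processing job i
		bool ok = true;
		for (auto [d, t] : tasks) {
			sum += t;
			if (sum > d) ok = false;  // criterion violated: job i cannot start by si
		}
		cout << (ok ? "YES" : "NO") << "\n";  // illustrative output only
	}
}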
As in the checking problem, let's relabel such that d1≤d2≤⋯≤dN and assume we always complete jobs in increasing order of label. To solve the full problem in O(N^2) time, we can use dynamic programming; let dp[i][j] be the minimum total time to complete j of the first i jobs, or ∞ if impossible. To transition from i to i+1 we can choose either to add the i+1'st job or not; if we add it, we need to check that the new total time dp[i][j]+ti+1 does not exceed di+1. The answer is the maximum j such that dp[N][j]≠∞.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
	cin.tie(0)->sync_with_stdio(0);
	int T;
	cin >> T;
	for (int tc = 0; tc < T; ++tc) {
		int N;
		cin >> N;
		vector<pair<ll, ll>> tasks;
		for (int n = 0; n < N; ++n) {
			ll s, t;
			cin >> s >> t;
			ll d = s + t;
			tasks.push_back({d, t});
		}
		sort(begin(tasks), end(tasks));  // increasing order of d = s + t
		vector<ll> dp{0};  // dp[j] = min total time to complete j jobs so far
		for (auto [d, t] : tasks) {
			dp.push_back(LLONG_MAX);
			// try adding the current job to every achievable count j
			for (int j = size(dp) - 2; j >= 0; --j)
				dp.at(j + 1) = min(dp.at(j + 1), dp.at(j) + t);
			// counts whose minimum total time exceeds d are now impossible
			while (dp.back() > d) dp.pop_back();
		}
		cout << size(dp) - 1 << "\n";
	}
}
To solve the original problem in O(N log N), we can use a greedy strategy. Maintain a set Si⊆[1,i] such that all of Si can be completed and some optimal solution is a subset of Si∪{i+1,…,N}. The base case is S0=∅. Then for each i∈[1,N], initialize Si as Si−1∪{i}. However, if ∑j∈Si tj>di, then not all of Si can be completed, and we need to remove at least one job from Si. We should greedily remove the job j with the maximum tj from Si. Since removing job i itself is always an option, the removed job has tj≥ti, so the sum drops back to at most its value before job i was added, which is at most di because the di are increasing; thus ∑j∈Si tj≤di after a single removal and no additional removals are required. Si can be maintained with a priority queue, and the final answer will be |SN|.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
	cin.tie(0)->sync_with_stdio(0);
	int T;
	cin >> T;
	for (int tc = 0; tc < T; ++tc) {
		int N;
		cin >> N;
		vector<pair<ll, ll>> tasks;
		for (int n = 0; n < N; ++n) {
			ll s, t;
			cin >> s >> t;
			ll d = s + t;
			tasks.push_back({d, t});
		}
		sort(begin(tasks), end(tasks));  // increasing order of d = s + t
		ll sum = 0;             // total time of the jobs currently kept
		priority_queue<ll> pq;  // times of the jobs currently kept (S_i)
		for (auto [d, t] : tasks) {
			pq.push(t);
			sum += t;
			if (sum > d) {  // not all kept jobs fit; drop the longest one
				sum -= pq.top();
				pq.pop();
			}
		}
		cout << size(pq) << "\n";
	}
}
Why is it okay to remove job j? Suppose there is an optimal solution s⊆Si−1∪{i,i+1,…,N} that uses j. The jobs of s with index at most i can all be completed, while all of Si∪{j} cannot (its total time exceeds di), so s cannot contain all of Si∪{j}; as j∈s, some job of Si is not part of s. In this case, we can remove j from s and add that job from Si to s, preserving the size of s. All jobs in s with index at most i can still be completed since they now form a subset of Si, and since the added job takes at most tj time, every job in s with index greater than i is started no later than before, so those jobs can also be completed. So the size of the optimal solution remains the same after removing job j.
Note: The greedy strategy can alternatively be interpreted as an acceleration of the DP solution, since the times in Si sorted in increasing order will match with the consecutive differences dp[i][1]−dp[i][0],dp[i][2]−dp[i][1],…,dp[i][|Si|]−dp[i][|Si|−1].
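As a rough illustration of this correspondence (not part of the solution), the sketch below runs the O(N^2) DP and the greedy side by side on small random inputs of my own choosing, and asserts after each job that the sorted contents of the priority queue equal the consecutive DP differences.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;

int main() {
	mt19937 rng(12345);  // arbitrary seed; input sizes and ranges below are arbitrary too
	for (int trial = 0; trial < 1000; ++trial) {
		int N = 1 + rng() % 8;
		vector<pair<ll, ll>> tasks;  // {d = s + t, t}
		for (int i = 0; i < N; ++i) {
			ll s = rng() % 20, t = 1 + rng() % 10;
			tasks.push_back({s + t, t});
		}
		sort(begin(tasks), end(tasks));
		vector<ll> dp{0};       // dp[j] = min total time to complete j jobs so far
		priority_queue<ll> pq;  // times of the jobs currently kept by the greedy
		ll sum = 0;
		for (auto [d, t] : tasks) {
			// DP transition, as in the O(N^2) solution
			dp.push_back(LLONG_MAX);
			for (int j = (int)size(dp) - 2; j >= 0; --j)
				dp[j + 1] = min(dp[j + 1], dp[j] + t);
			while (dp.back() > d) dp.pop_back();
			// greedy transition, as in the O(N log N) solution
			pq.push(t);
			sum += t;
			if (sum > d) { sum -= pq.top(); pq.pop(); }
			// compare sorted pq contents with consecutive DP differences
			vector<ll> kept;
			for (priority_queue<ll> tmp = pq; !tmp.empty(); tmp.pop()) kept.push_back(tmp.top());
			reverse(begin(kept), end(kept));  // increasing order
			vector<ll> diffs;
			for (int j = 1; j < (int)size(dp); ++j) diffs.push_back(dp[j] - dp[j - 1]);
			assert(kept == diffs);
		}
	}
	cout << "all checks passed\n";
}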