Mirror of https://framagit.org/veretcle/oolatoocs.git, synced 2025-07-21 13:24:18 +02:00
Compare commits
18 Commits
8674048e8d
378d973697
2cb732efed
5d685b5748
66664ff621
fd84730bdc
692f4ff040
3397416a93
f782987991
26788f9d37
ca9b388a50
42958e0a92
77be17e7bf
bd9fd27fd1
3e6cae6136
f10baa3eb2
c113c1472a
cdf7dc70c1
Cargo.lock (generated, 646 lines changed): diff suppressed because it is too large.
Cargo.toml

@@ -1,11 +1,12 @@
 [package]
 name = "oolatoocs"
-version = "1.3.1"
+version = "2.0.0"
 edition = "2021"
 
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 
 [dependencies]
+chrono = "0.4.31"
 clap = "^4"
 env_logger = "^0.10"
 futures = "^0.3"
@@ -15,7 +16,7 @@ megalodon = "^0.11"
 oauth1-request = "^0.6"
 regex = "^1.10"
 reqwest = { version = "^0.11", features = ["json", "stream", "multipart"] }
-rusqlite = "^0.27"
+rusqlite = { version = "^0.30", features = ["chrono"] }
 serde = { version = "^1.0", features = ["derive"] }
 tokio = { version = "^1.33", features = ["rt-multi-thread", "macros", "time"] }
 toml = "^0.8"
README.md (new file, 77 lines)

@@ -0,0 +1,77 @@

# oolatoocs, a Mastodon to Twitter bot

So what is it? Originally, I wrote, with some help, [Scootaloo](https://framagit.org/veretcle/scootaloo/), a Twitter to Mastodon bot that let the [writers at NintendojoFR](https://www.nintendojo.fr) not worry about Mastodon: the vast majority of writers were posting to Twitter, the bot scooped everything up and arranged it properly for Mastodon, and everything was fine and dandy. It was also used, in an altered, beefed-up version, for [Nupes.social](https://nupes.social) to make the tweets from the NUPES political alliance on Twitter more easily accessible on Mastodon.

But then Elon came, and we couldn’t read data from Twitter anymore. So we had to rely on copy/pasting things from one to the other, which is neither fun nor efficient.

Hence `oolatoocs`, which takes a Mastodon timeline and reposts it to Twitter as properly as possible.

# Remarkable features

What it can do:

* Reproduces the Toot content in the Tweet;
* Cuts (poorly) the Toot in half if it’s too long for Twitter and threads it (the cut uses a word count, not the best method, but it gets the job done);
* Reuploads images/gifs/videos from Mastodon to Twitter;
* Can reproduce threads from Mastodon to Twitter;
* Can reproduce polls from Mastodon to Twitter;
* Can prevent a Toot from being tweeted by using the #NoTweet (case-insensitive) hashtag in Mastodon.

# Configuration file

The configuration is relatively easy to follow:

```toml
[oolatoocs]
db_path = "/var/lib/oolatoocs/db.sqlite3" # the path to the DB where toot/tweet are stored

[mastodon] # This part can be generated, see below
base = "https://m.nintendojo.fr"
client_id = "<REDACTED>"
client_secret = "<REDACTED>"
redirect = "urn:ietf:wg:oauth:2.0:oob"
token = "<REDACTED>"

[twitter] # you’ll have to get this part from Twitter, this can be done via https://developer.twitter.com/en
consumer_key = "<REDACTED>"
consumer_secret = "<REDACTED>"
oauth_token = "<REDACTED>"
oauth_token_secret = "<REDACTED>"
```

## How to generate the Mastodon keys?

Just run:

```bash
oolatoocs register --host https://<your-instance>
```

And follow the instructions.

## How to generate the Twitter part?

You’ll need to generate a key. This is a real pain in the ass, but you can use [this script](https://github.com/twitterdev/Twitter-API-v2-sample-code/blob/main/Manage-Tweets/create_tweet.py), modify it and run it to recover your key.

Will I some day make a subcommand to generate it? Maybe…

# How to run

First of all, the `--help`:

```bash
A Mastodon to Twitter Bot

Usage: oolatoocs [OPTIONS] [COMMAND]

Commands:
  init      Command to init the DB
  register  Command to register to Mastodon Instance
  help      Print this message or the help of the given subcommand(s)

Options:
  -c, --config <CONFIG_FILE>  TOML config file for oolatoocs [default: /usr/local/etc/oolatoocs.toml]
  -h, --help                  Print help
  -V, --version               Print version
```

Ideally, you’ll put it in a cron job (run by a non-root user), with the default config file path, and let it do its job. Yeah, that’s it.
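For instance, a crontab entry along these lines would do; the five-minute interval and the binary path are illustrative assumptions, not taken from the project:

```bash
# illustrative crontab entry for a non-root user; interval and binary path are assumptions,
# the default config path /usr/local/etc/oolatoocs.toml is used implicitly
*/5 * * * * /usr/local/bin/oolatoocs
```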
src/lib.rs (83 lines changed)

@@ -5,20 +5,20 @@ mod config;
 pub use config::{parse_toml, Config};
 
 mod state;
-pub use state::init_db;
 #[allow(unused_imports)]
-use state::{read_state, write_state, TweetToToot};
+use state::{delete_state, read_all_tweet_state, read_state, write_state, TweetToToot};
+pub use state::{init_db, migrate_db};
 
 mod mastodon;
-use mastodon::get_mastodon_timeline_since;
 pub use mastodon::register;
+use mastodon::{get_mastodon_instance, get_mastodon_timeline_since, get_status_edited_at};
 
 mod utils;
 use utils::{generate_multi_tweets, strip_everything};
 
 mod twitter;
 #[allow(unused_imports)]
-use twitter::{generate_media_ids, post_tweet};
+use twitter::{delete_tweet, generate_media_ids, post_tweet, transform_poll};
 
 use rusqlite::Connection;
 
@@ -27,15 +27,61 @@ pub async fn run(config: &Config) {
     let conn = Connection::open(&config.oolatoocs.db_path)
         .unwrap_or_else(|e| panic!("Cannot open DB: {}", e));
 
-    let last_toot_id = read_state(&conn, None)
-        .unwrap_or_else(|e| panic!("Cannot get last toot id: {}", e))
-        .map(|r| r.toot_id);
+    let mastodon = get_mastodon_instance(&config.mastodon);
 
-    let timeline = get_mastodon_timeline_since(&config.mastodon, last_toot_id)
+    let last_entry =
+        read_state(&conn, None).unwrap_or_else(|e| panic!("Cannot get last toot id: {}", e));
+
+    let last_toot_id: Option<u64> = match last_entry {
+        None => None, // Does not exist, this is the same as previously
+        Some(t) => {
+            match get_status_edited_at(&mastodon, t.toot_id).await {
+                None => Some(t.toot_id),
+                Some(d) => {
+                    // a date has been found
+                    if d > t.datetime.unwrap() {
+                        // said date is posterior to the previously
+                        // written tweet, we need to delete/rewrite
+                        for local_tweet_id in read_all_tweet_state(&conn, t.toot_id)
+                            .unwrap_or_else(|e| {
+                                panic!(
+                                    "Cannot fetch all tweets associated with Toot ID {}: {}",
+                                    t.toot_id, e
+                                )
+                            })
+                            .into_iter()
+                        {
+                            delete_tweet(&config.twitter, local_tweet_id)
+                                .await
+                                .unwrap_or_else(|e| {
+                                    panic!("Cannot delete Tweet ID ({}): {}", t.tweet_id, e)
+                                });
+                        }
+                        delete_state(&conn, t.toot_id).unwrap_or_else(|e| {
+                            panic!("Cannot delete Toot ID ({}): {}", t.toot_id, e)
+                        });
+                        read_state(&conn, None)
+                            .unwrap_or_else(|e| panic!("Cannot get last toot id: {}", e))
+                            .map(|a| a.toot_id)
+                    } else {
+                        Some(t.toot_id)
+                    }
+                }
+            }
+        }
+    };
+
+    let timeline = get_mastodon_timeline_since(&mastodon, last_toot_id)
         .await
         .unwrap_or_else(|e| panic!("Cannot get instance: {}", e));
 
     for toot in timeline {
+        // detecting tag #NoTweet and skipping the toot
+        if toot.tags.iter().any(|f| &f.name == "notweet") {
+            continue;
+        }
+
+        // form tweet_content and strip everything useless in it
         let Ok(mut tweet_content) = strip_everything(&toot.content, &toot.tags) else {
             continue; // skip in case we can’t strip something
         };
@@ -51,17 +97,33 @@ pub async fn run(config: &Config) {
         // if the toot is too long, we cut it in half here
         if let Some((first_half, second_half)) = generate_multi_tweets(&tweet_content) {
             tweet_content = second_half;
-            let reply_id = post_tweet(&config.twitter, &first_half, &[], &reply_to)
+            // post the first half
+            let reply_id = post_tweet(&config.twitter, first_half, vec![], reply_to, None)
                 .await
                 .unwrap_or_else(|e| panic!("Cannot post the first half of {}: {}", &toot.id, e));
+            // write it to db
+            write_state(
+                &conn,
+                TweetToToot {
+                    tweet_id: reply_id,
+                    toot_id: toot.id.parse::<u64>().unwrap(),
+                    datetime: None,
+                },
+            )
+            .unwrap_or_else(|e| {
+                panic!("Cannot store Toot/Tweet ({}/{}): {}", &toot.id, reply_id, e)
+            });
             reply_to = Some(reply_id);
         };
 
+        // treats poll if any
+        let in_poll = toot.poll.map(|p| transform_poll(&p));
+
         // treats medias
         let medias = generate_media_ids(&config.twitter, &toot.media_attachments).await;
 
         // posts corresponding tweet
-        let tweet_id = post_tweet(&config.twitter, &tweet_content, &medias, &reply_to)
+        let tweet_id = post_tweet(&config.twitter, tweet_content, medias, reply_to, in_poll)
            .await
            .unwrap_or_else(|e| panic!("Cannot Tweet {}: {}", toot.id, e));
 
@@ -71,6 +133,7 @@ pub async fn run(config: &Config) {
             TweetToToot {
                 tweet_id,
                 toot_id: toot.id.parse::<u64>().unwrap(),
+                datetime: None,
             },
         )
         .unwrap_or_else(|e| panic!("Cannot store Toot/Tweet ({}/{}): {}", &toot.id, tweet_id, e));
src/main.rs (20 lines changed)

@@ -49,6 +49,21 @@ fn main() {
                         .display_order(1),
                 ),
         )
+        .subcommand(
+            Command::new("migrate")
+                .version(env!("CARGO_PKG_VERSION"))
+                .about("Command to register to Mastodon Instance")
+                .arg(
+                    Arg::new("config")
+                        .short('c')
+                        .long("config")
+                        .value_name("CONFIG_FILE")
+                        .help(format!("TOML config file for {}", env!("CARGO_PKG_NAME")))
+                        .num_args(1)
+                        .default_value(DEFAULT_CONFIG_PATH)
+                        .display_order(1),
+                ),
+        )
         .get_matches();
 
     env_logger::init();
@@ -63,6 +78,11 @@ fn main() {
             register(sub_m.get_one::<String>("host").unwrap());
             return;
         }
+        Some(("migrate", sub_m)) => {
+            let config = parse_toml(sub_m.get_one::<String>("config").unwrap());
+            migrate_db(&config.oolatoocs.db_path).unwrap();
+            return;
+        }
         _ => (),
     }
 
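Based on the subcommand definition above, a one-off invocation of the new migration would look like this; the config path shown is simply the documented default, not a new requirement:

```bash
# upgrade an existing database in place, then run the bot as usual
oolatoocs migrate --config /usr/local/etc/oolatoocs.toml
oolatoocs --config /usr/local/etc/oolatoocs.toml
```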
src/mastodon.rs

@@ -1,4 +1,5 @@
 use crate::config::MastodonConfig;
+use chrono::{DateTime, Utc};
 use megalodon::{
     entities::{Status, StatusVisibility},
     generator,
@@ -10,16 +11,29 @@ use megalodon::{
 use std::error::Error;
 use std::io::stdin;
 
-pub async fn get_mastodon_timeline_since(
-    config: &MastodonConfig,
-    id: Option<u64>,
-) -> Result<Vec<Status>, Box<dyn Error>> {
-    let mastodon = Mastodon::new(
+/// Get Mastodon Object instance
+pub fn get_mastodon_instance(config: &MastodonConfig) -> Mastodon {
+    Mastodon::new(
         config.base.to_string(),
         Some(config.token.to_string()),
         None,
-    );
+    )
+}
+
+/// Get the edited_at field from the specified toot
+pub async fn get_status_edited_at(mastodon: &Mastodon, t: u64) -> Option<DateTime<Utc>> {
+    mastodon
+        .get_status(t.to_string())
+        .await
+        .ok()
+        .and_then(|t| t.json.edited_at)
+}
+
+/// Get the home timeline since the last toot
+pub async fn get_mastodon_timeline_since(
+    mastodon: &Mastodon,
+    id: Option<u64>,
+) -> Result<Vec<Status>, Box<dyn Error>> {
     let input_options = GetHomeTimelineInputOptions {
         only_media: Some(false),
         limit: None,
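For reference, `get_status_edited_at` reads the `edited_at` field exposed by the standard Mastodon statuses endpoint; a manual check against the instance from the sample configuration could look like the sketch below, where `<toot-id>` is a placeholder and `jq` is only used for readability:

```bash
# prints the edit timestamp, or null if the toot was never edited
curl -s https://m.nintendojo.fr/api/v1/statuses/<toot-id> | jq .edited_at
```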
src/state.rs (277 lines changed)

@@ -1,3 +1,4 @@
+use chrono::{DateTime, Utc};
 use log::debug;
 use rusqlite::{params, Connection, OptionalExtension};
 use std::error::Error;
@@ -7,6 +8,34 @@ use std::error::Error;
 pub struct TweetToToot {
     pub tweet_id: u64,
     pub toot_id: u64,
+    pub datetime: Option<DateTime<Utc>>,
+}
+
+/// Deletes a given state
+pub fn delete_state(conn: &Connection, toot_id: u64) -> Result<(), Box<dyn Error>> {
+    debug!("Deleting Toot ID {}", toot_id);
+    conn.execute(
+        &format!("DELETE FROM tweet_to_toot WHERE toot_id = {}", toot_id),
+        [],
+    )?;
+    Ok(())
+}
+
+/// Retrieves all tweets associated to a toot in the form of a vector
+pub fn read_all_tweet_state(conn: &Connection, toot_id: u64) -> Result<Vec<u64>, Box<dyn Error>> {
+    let query = format!(
+        "SELECT tweet_id FROM tweet_to_toot WHERE toot_id = {};",
+        toot_id
+    );
+    let mut stmt = conn.prepare(&query)?;
+    let mut rows = stmt.query([])?;
+
+    let mut v = Vec::new();
+    while let Some(row) = rows.next()? {
+        v.push(row.get(0)?);
+    }
+
+    Ok(v)
 }
 
 /// if None is passed, read the last tweet from DB
@@ -17,8 +46,10 @@ pub fn read_state(
 ) -> Result<Option<TweetToToot>, Box<dyn Error>> {
     debug!("Reading toot_id {:?}", s);
     let query: String = match s {
-        Some(i) => format!("SELECT * FROM tweet_to_toot WHERE toot_id = {i}"),
-        None => "SELECT * FROM tweet_to_toot ORDER BY toot_id DESC LIMIT 1".to_string(),
+        Some(i) => format!(
+            "SELECT tweet_id, toot_id, UNIXEPOCH(datetime) AS datetime FROM tweet_to_toot WHERE toot_id = {i} ORDER BY tweet_id DESC LIMIT 1"
+        ),
+        None => "SELECT tweet_id, toot_id, UNIXEPOCH(datetime) AS datetime FROM tweet_to_toot ORDER BY toot_id DESC LIMIT 1".to_string(),
     };
 
     let mut stmt = conn.prepare(&query)?;
@@ -28,6 +59,7 @@ pub fn read_state(
         Ok(TweetToToot {
             tweet_id: row.get("tweet_id")?,
             toot_id: row.get("toot_id")?,
+            datetime: Some(DateTime::from_timestamp(row.get("datetime").unwrap(), 0).unwrap()),
         })
     })
     .optional()?;
@@ -56,8 +88,9 @@ pub fn init_db(d: &str) -> Result<(), Box<dyn Error>> {
 
     conn.execute(
         "CREATE TABLE IF NOT EXISTS tweet_to_toot (
-            tweet_id INTEGER,
-            toot_id INTEGER PRIMARY KEY
+            tweet_id INTEGER PRIMARY KEY,
+            toot_id INTEGER,
+            datetime INTEGER DEFAULT CURRENT_TIMESTAMP
         )",
         [],
     )?;
@@ -65,6 +98,56 @@ pub fn init_db(d: &str) -> Result<(), Box<dyn Error>> {
     Ok(())
 }
 
+/// Migrate DB from 1.5.x to 1.6.x
+pub fn migrate_db(d: &str) -> Result<(), Box<dyn Error>> {
+    debug!("Migration DB for Oolatoocs");
+
+    let conn = Connection::open(d)?;
+
+    let res = conn.execute("SELECT datetime from tweet_to_toot;", []);
+
+    // If the column can be selected then, it’s OK
+    // if not, see if the error is a missing column and add it
+    match res {
+        Err(e) => match e.to_string().as_str() {
+            "no such column: datetime" => migrate_db_alter_table(&conn), //column does not exist
+            "Execute returned results - did you mean to call query?" => Ok(()), // return results,
+            // column does
+            // exist
+            _ => Err(e.into()),
+        },
+        Ok(_) => Ok(()),
+    }
+}
+
+/// Creates a new table, copy the data from the old table and rename it
+fn migrate_db_alter_table(c: &Connection) -> Result<(), Box<dyn Error>> {
+    // create the new table
+    c.execute(
+        "CREATE TABLE IF NOT EXISTS tweet_to_toot_new (
+            tweet_id INTEGER PRIMARY KEY,
+            toot_id INTEGER,
+            datetime INTEGER DEFAULT CURRENT_TIMESTAMP
+        )",
+        [],
+    )?;
+
+    // copy data from the old table
+    c.execute(
+        "INSERT INTO tweet_to_toot_new (tweet_id, toot_id)
+        SELECT tweet_id, toot_id FROM tweet_to_toot;",
+        [],
+    )?;
+
+    // drop the old table
+    c.execute("DROP TABLE tweet_to_toot;", [])?;
+
+    // rename the new table
+    c.execute("ALTER TABLE tweet_to_toot_new RENAME TO tweet_to_toot;", [])?;
+
+    Ok(())
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -119,17 +202,25 @@ mod tests {
         let t_in = TweetToToot {
             tweet_id: 123456789,
             toot_id: 987654321,
+            datetime: None,
         };
 
         write_state(&conn, t_in).unwrap();
 
-        let mut stmt = conn.prepare("SELECT * FROM tweet_to_toot;").unwrap();
+        let mut stmt = conn
+            .prepare(
+                "SELECT tweet_id, toot_id, UNIXEPOCH(datetime) AS datetime FROM tweet_to_toot;",
+            )
+            .unwrap();
+
         let t_out = stmt
             .query_row([], |row| {
                 Ok(TweetToToot {
                     tweet_id: row.get("tweet_id").unwrap(),
                     toot_id: row.get("toot_id").unwrap(),
+                    datetime: Some(
+                        DateTime::from_timestamp(row.get("datetime").unwrap(), 0).unwrap(),
+                    ),
                 })
             })
            .unwrap();
@@ -226,4 +317,180 @@ mod tests {
         assert_eq!(t_out.tweet_id, 100);
         assert_eq!(t_out.toot_id, 1000);
     }
+
+    #[test]
+    fn test_last_toot_id_read_state() {
+        let d = "/tmp/test_last_toot_id_read_state.sqlite";
+
+        init_db(d).unwrap();
+
+        let conn = Connection::open(d).unwrap();
+
+        conn.execute(
+            "INSERT INTO tweet_to_toot(tweet_id, toot_id)
+            VALUES (100, 1000), (101, 1000);",
+            [],
+        )
+        .unwrap();
+
+        let t_out = read_state(&conn, Some(1000)).unwrap().unwrap();
+
+        remove_file(d).unwrap();
+
+        assert_eq!(t_out.tweet_id, 101);
+        assert_eq!(t_out.toot_id, 1000);
+    }
+
+    #[test]
+    fn test_migrate_db_alter_table() {
+        let d = "/tmp/test_migrate_db_alter_table.sqlite";
+
+        let conn = Connection::open(d).unwrap();
+
+        init_db(d).unwrap();
+
+        write_state(
+            &conn,
+            TweetToToot {
+                tweet_id: 0,
+                toot_id: 0,
+                datetime: None,
+            },
+        )
+        .unwrap();
+        write_state(
+            &conn,
+            TweetToToot {
+                tweet_id: 1,
+                toot_id: 1,
+                datetime: None,
+            },
+        )
+        .unwrap();
+
+        migrate_db_alter_table(&conn).unwrap();
+
+        let mut stmt = conn.prepare("PRAGMA table_info(tweet_to_toot);").unwrap();
+        let mut t = stmt.query([]).unwrap();
+
+        while let Some(row) = t.next().unwrap() {
+            if row.get::<usize, u8>(0).unwrap() == 2 {
+                assert_eq!(row.get::<usize, String>(1).unwrap(), "datetime".to_string());
+            }
+        }
+
+        remove_file(d).unwrap();
+    }
+
+    #[test]
+    fn test_migrate_db() {
+        // this should be idempotent
+        let d = "/tmp/test_migrate_db.sqlite";
+
+        let conn = Connection::open(d).unwrap();
+        conn.execute(
+            "CREATE TABLE IF NOT EXISTS tweet_to_toot (
+                tweet_id INTEGER,
+                toot_id INTEGER PRIMARY KEY
+            )",
+            [],
+        )
+        .unwrap();
+
+        conn.execute("INSERT INTO tweet_to_toot VALUES (0, 0), (1, 1);", [])
+            .unwrap();
+
+        migrate_db(d).unwrap();
+
+        let last_state = read_state(&conn, None).unwrap().unwrap();
+
+        assert_eq!(last_state.tweet_id, 1);
+        assert_eq!(last_state.toot_id, 1);
+
+        migrate_db(d).unwrap(); // shouldn’t do anything
+
+        remove_file(d).unwrap();
+    }
+
+    #[test]
+    fn test_delete_state() {
+        let d = "/tmp/test_delete_state.sqlite";
+
+        init_db(d).unwrap();
+
+        let conn = Connection::open(d).unwrap();
+
+        conn.execute(
+            "INSERT INTO tweet_to_toot(tweet_id, toot_id) VALUES (0, 0);",
+            [],
+        )
+        .unwrap();
+
+        delete_state(&conn, 0).unwrap();
+
+        let mut stmt = conn
+            .prepare(
+                "SELECT tweet_id, toot_id, UNIXEPOCH(datetime) AS datetime FROM tweet_to_toot;",
+            )
+            .unwrap();
+
+        let t_out = stmt.query_row([], |row| {
+            Ok(TweetToToot {
+                tweet_id: row.get("tweet_id").unwrap(),
+                toot_id: row.get("toot_id").unwrap(),
+                datetime: Some(DateTime::from_timestamp(row.get("datetime").unwrap(), 0).unwrap()),
+            })
+        });
+
+        assert!(t_out.is_err_and(|x| x == rusqlite::Error::QueryReturnedNoRows));
+
+        conn.execute(
+            "INSERT INTO tweet_to_toot(tweet_id, toot_id) VALUES(102,42), (103,42);",
+            [],
+        )
+        .unwrap();
+
+        delete_state(&conn, 42).unwrap();
+
+        let mut stmt = conn
+            .prepare(
+                "SELECT tweet_id, toot_id, UNIXEPOCH(datetime) AS datetime FROM tweet_to_toot;",
+            )
+            .unwrap();
+
+        let t_out = stmt.query_row([], |row| {
+            Ok(TweetToToot {
+                tweet_id: row.get("tweet_id").unwrap(),
+                toot_id: row.get("toot_id").unwrap(),
+                datetime: Some(DateTime::from_timestamp(row.get("datetime").unwrap(), 0).unwrap()),
+            })
+        });
+
+        assert!(t_out.is_err_and(|x| x == rusqlite::Error::QueryReturnedNoRows));
+
+        remove_file(d).unwrap();
+    }
+
+    #[test]
+    fn test_read_all_tweet_state() {
+        let d = "/tmp/read_all_tweet_state.sqlite";
+
+        init_db(d).unwrap();
+
+        let conn = Connection::open(d).unwrap();
+
+        conn.execute(
+            "INSERT INTO tweet_to_toot(tweet_id, toot_id) VALUES (102, 42), (103, 42), (105, 43);",
+            [],
+        )
+        .unwrap();
+
+        let v1 = read_all_tweet_state(&conn, 43).unwrap();
+        let v2 = read_all_tweet_state(&conn, 42).unwrap();
+
+        assert_eq!(v1, vec![105]);
+        assert_eq!(v2, vec![102, 103]);
+
+        remove_file(d).unwrap();
+    }
 }
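To check the result of `migrate_db` by hand, the new column can be inspected with the sqlite3 CLI; the DB path below is the one from the sample configuration:

```bash
# the migrated table should list three columns: tweet_id, toot_id, datetime
sqlite3 /var/lib/oolatoocs/db.sqlite3 'PRAGMA table_info(tweet_to_toot);'
```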
src/twitter.rs

@@ -1,8 +1,12 @@
 use crate::config::TwitterConfig;
 use crate::error::OolatoocsError;
+use chrono::Utc;
 use futures::{stream, StreamExt};
 use log::{debug, error, warn};
-use megalodon::entities::attachment::{Attachment, AttachmentType};
+use megalodon::entities::{
+    attachment::{Attachment, AttachmentType},
+    Poll,
+};
 use oauth1_request::Token;
 use reqwest::{
     multipart::{Form, Part},
@@ -28,6 +32,8 @@ struct Tweet {
     media: Option<TweetMediasIds>,
     #[serde(skip_serializing_if = "Option::is_none")]
     reply: Option<TweetReply>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    poll: Option<TweetPoll>,
 }
 
 #[derive(Serialize, Debug)]
@@ -40,6 +46,12 @@ struct TweetReply {
     in_reply_to_tweet_id: String,
 }
 
+#[derive(Serialize, Debug)]
+pub struct TweetPoll {
+    pub options: Vec<String>,
+    pub duration_minutes: u16,
+}
+
 #[derive(Deserialize, Debug)]
 struct TweetResponse {
     data: TweetResponseData,
@@ -101,6 +113,35 @@ fn get_token(config: &TwitterConfig) -> Token {
     )
 }
 
+/// This functions deletes a tweet, given its id
+pub async fn delete_tweet(config: &TwitterConfig, id: u64) -> Result<(), Box<dyn Error>> {
+    debug!("Deleting Tweet {}", id);
+    let empty_request = EmptyRequest {}; // Why? Because fuck you, that’s why!
+    let token = get_token(config);
+
+    let client = Client::new();
+    let res = client
+        .delete(format!("{}/{}", TWITTER_API_TWEET_URL, id))
+        .header(
+            "Authorization",
+            oauth1_request::delete(
+                format!("{}/{}", TWITTER_API_TWEET_URL, id),
+                &empty_request,
+                &token,
+                oauth1_request::HMAC_SHA1,
+            ),
+        )
+        .send()
+        .await?;
+
+    if !res.status().is_success() {
+        return Err(OolatoocsError::new(&format!("Cannot delete Tweet {}", id)).into());
+    }
+
+    Ok(())
+}
+
+/// This function generates a media_ids vec to be used by Twitter
 pub async fn generate_media_ids(config: &TwitterConfig, media_attach: &[Attachment]) -> Vec<u64> {
     let mut medias: Vec<u64> = vec![];
 
@@ -416,24 +457,38 @@ async fn upload_chunk_media(
     Ok(orig_media_id.media_id)
 }
 
+pub fn transform_poll(p: &Poll) -> TweetPoll {
+    let poll_end_datetime = p.expires_at.unwrap(); // should be safe at this point
+    let now = Utc::now();
+    let diff = poll_end_datetime.signed_duration_since(now);
+
+    TweetPoll {
+        options: p.options.iter().map(|i| i.title.clone()).collect(),
+        duration_minutes: diff.num_minutes().try_into().unwrap(), // safe here, number is positive
+                                                                  // and can’t be over 21600
+    }
+}
+
 /// This posts Tweets with all the associated medias
 pub async fn post_tweet(
     config: &TwitterConfig,
-    content: &str,
-    medias: &[u64],
-    reply_to: &Option<u64>,
+    content: String,
+    medias: Vec<u64>,
+    reply_to: Option<u64>,
+    poll: Option<TweetPoll>,
 ) -> Result<u64, Box<dyn Error>> {
     let empty_request = EmptyRequest {}; // Why? Because fuck you, that’s why!
     let token = get_token(config);
 
     let tweet = Tweet {
-        text: content.to_string(),
+        text: content,
         media: medias.is_empty().not().then(|| TweetMediasIds {
             media_ids: medias.iter().map(|m| m.to_string()).collect(),
         }),
         reply: reply_to.map(|s| TweetReply {
             in_reply_to_tweet_id: s.to_string(),
         }),
+        poll,
     };
 
     let client = Client::new();
src/utils.rs (10 lines changed)

@@ -33,7 +33,7 @@ pub fn generate_multi_tweets(content: &str) -> Option<(String, String)> {
 fn twitter_count(content: &str) -> usize {
     let mut count = 0;
 
-    let split_content = content.split(' ');
+    let split_content = content.split(&[' ', '\n']);
     count += split_content.clone().count() - 1; // count the spaces
 
     for word in split_content {
@@ -105,6 +105,14 @@ mod tests {
         let content = "this is the link https://www.google.com/tamerelol/youpi/tonperemdr/tarace.html if you like! What if I shit a final";
 
         assert_eq!(twitter_count(content), 76);
+
+        let content = "multi ple space";
+
+        assert_eq!(twitter_count(content), content.chars().count());
+
+        let content = "This link is LEEEEET\n\nhttps://www.factornews.com/actualites/ca-sent-le-sapin-pour-free-radical-design-49985.html";
+
+        assert_eq!(twitter_count(content), 45);
     }
 
     #[test]